moby / buildkit

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit
Apache License 2.0

Cannot give a local image to FROM when using docker-container #2343

Open bra-fsn opened 3 years ago

bra-fsn commented 3 years ago

I have a Dockerfile that references another image, which is stored locally (overridable with a build arg):

ARG PARENT=ds_test_base:test
FROM ${PARENT}

The image is there:

$ docker image ls ds_test_base:test
REPOSITORY     TAG           IMAGE ID       CREATED       SIZE
ds_test_base   test          eb69be11f1e5   3 hours ago   10.4GB

When I try to build this Dockerfile with --builder xyz, which is backed by a docker-container driver, I get this:

#3 [internal] load metadata for docker.io/library/ds_test_base:test
#3 ERROR: pull access denied, repository does not exist or may require authorization: authorization status: 401: authorization failed
------
 > [internal] load metadata for docker.io/library/ds_test_base:test:
------
Dockerfile:2
--------------------
   1 |     ARG PARENT=ds_test_base:test
   2 | >>> FROM ${PARENT}
   3 |     
   4 |     MAINTAINER "Openmail"
--------------------
error: failed to solve: ds_test_base:test: pull access denied, repository does not exist or may require authorization: authorization status: 401: authorization failed

If I try to build it with exactly the same parameters, but omitting --builder xyz, it builds just fine.
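For reference, the two invocations look roughly like this (a hedged reconstruction, since the exact command line isn't shown above; the output tag is a placeholder):

# fails: the containerized builder cannot see the daemon's local image store
docker buildx build --builder xyz -t ds_test:out .

# works: the default builder resolves FROM from the dockerd image store
docker buildx build -t ds_test:out .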

$ docker version
Client: Docker Engine - Community
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:54:27 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:33 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

$ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.6.1-docker)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 2
  Running: 2
  Paused: 0
  Stopped: 0
 Images: 27
 Server Version: 20.10.8
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e25210fe30a0a703442421b0f60afac609f950a3
 runc version: v1.0.1-0-g4144b63
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-81-generic
 Operating System: Ubuntu 20.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 30.58GiB
 Name: ip-10-150-29-76
 ID: ZJFE:DJF7:DUGP:WOGK:A66E:IVQS:X6HB:CKP4:SBRG:VROU:ZIBO:GWXF
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

$ docker image ls moby/buildkit
REPOSITORY      TAG               IMAGE ID       CREATED       SIZE
moby/buildkit   buildx-stable-1   2b537a02e2d9   6 weeks ago   144MB
thaJeztah commented 2 years ago

If I try to build it with exactly the same parameters, but omitting --builder xyz, it builds just fine.

I think this might be expected; if you're using a containerised builder, that builder uses its own cache for images, which is not shared with the dockerd image cache; when building, you'll probably see a warning about this;

WARN[0000] No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load

If you use the --load option, the image will be loaded into the docker daemon's local image cache (but this cache only supports a single architecture, so no multi-arch images can be stored there). I don't think it's currently possible to read those images back (for use in FROM), but I may be mistaken (@crazy-max ?).
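A minimal sketch of that option, assuming a builder named xyz and the tag from this issue:

docker buildx build --builder xyz --load -t ds_test_base:test .

After --load the image shows up in docker image ls, but a subsequent build on the containerized builder still resolves FROM against the registry, which is the limitation discussed here.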

I wonder if it's possible, though, to make the container builder itself keep the image (for cases where the second build is running on the same builder; @crazy-max, are the volume changes you made related to that?).

Otherwise, it's probably best to push your images to a registry (which would also address the multi-arch use-case, if you happen to be building a multi-arch image).

But, yes, I think the UX could be improved somewhat here to make this easier to use (or to make it clearer how to do this).

tonistiigi commented 2 years ago

I think the solution we're going with is to do this outside of buildkit like https://github.com/docker/buildx/issues/447

AlSummer commented 2 years ago

Is there any workaround for this? I'm attempting to build an image on an instance outside my network (so it cannot access my custom registry), and I need to use a base image from that registry. I have copied the tar file for that image, but I cannot find a way to load that image into buildkit.

tonistiigi commented 2 years ago

https://github.com/docker/buildx/blob/v0.8.0/docs/reference/buildx_build.md#build-context https://github.com/docker/buildx/blob/v0.8.0/docs/reference/buildx_bake.md#defining-additional-build-contexts-and-linking-targets
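As a hedged illustration of the first link (the registry host here is a placeholder, not from this issue), a named build context lets a FROM ref be redirected to another source at build time:

docker buildx build --build-context ds_test_base:test=docker-image://registry.example.com/ds_test_base:test .

Any FROM ds_test_base:test in the Dockerfile then resolves against the given source instead of Docker Hub.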

deitch commented 2 years ago

FWIW, this is exactly what we raised in #2210. The container builder has a regression in functionality compared to the docker builder, at least as far as the cache is concerned.

I had looked into how the containerized builder works, and was surprised that it was actually missing only a few things to make this work. If I recall correctly:

You could do other things, like load and save equivalents, which would be helpful, but not, I believe, critically necessary.

I still know of a few people willing to help with it a bit, but it has been 9 months since we opened the issue.

deitch commented 2 years ago

@tonistiigi how does this help it?

deitch commented 2 years ago

As far as I can tell - please do correct me - the buildx bake context stuff allows you to, essentially, "alias" a FROM (or --from=) in a Dockerfile or other builder to one of: a local directory, an image in a registry, or possibly the results of a previous stage (based on this).

The documentation sounds pretty hesitant about using that target reference:

Please note that in most cases you should just use a single multi-stage Dockerfile with multiple targets for similar behavior. This case is recommended when you have multiple Dockerfiles that can't be easily merged into one.

I don't understand how this manages to store the output of one to the other, especially with containerized builder. Is it just because it is building both at once, so the builder knows about all of the outputs and can "hold onto" them?

More important UX question: Does this mean that, in order to do the simple "use a local cached image" that we have been used to for docker since day 1, where I do docker build -t somelocalimage:foo -f Dockerfile1 . and then consume it in another with FROM somelocalimage:foo, I now need to:

  1. have a bake.hcl in addition to my Dockerfiles
  2. learn the bake syntax
  3. actually have both of them there
  4. build them both at the same time so the bake file can reference them
  5. call a different command

all to replicate a "docker-simple" (TM) functionality from the existing flow?

tonistiigi commented 2 years ago

@deitch As has been explained to you multiple times, nothing changes with the existing build behavior. If you want your builds to access Docker images you need to use the Docker driver. It would be the same as requiring builds to run in k8s pods but not using the k8s driver to do it.

More important UX question: Does this mean that,

Yes, when using Docker images as intermediate storage you needed to run multiple build commands with many flags, in a specific order, and then invent ways to clean up the images created as a side effect of your builds so you don't run out of disk space. If you use named context you only have one command with no extra arguments that takes care of all of this. As the docs explain, in most cases this isn't even needed and you should just use multi-stage builds. This old pattern was invented before multi-stage build support was added and is only needed for cases where you have a requirement to build multiple projects that depend on each other and can't be merged (and you can't access any registry).

deitch commented 2 years ago

If you want your builds to access Docker images you need to use the Docker driver

Understood. But isn't the point of buildkit and the various drivers to open up new capabilities? buildkit does this phenomenally well, but some of those capabilities have not yet been retrofitted into the docker driver. So you have to choose between:

I think we are trying to bridge this gap: if the docker driver could do everything buildkit can (which it eventually will, I understand), and containerized buildkit could do everything docker can (again, within reason), then the choice between Docker features and buildkit features (and if you need both, you are out of luck) wouldn't be a problem.

As the docs explain, in most cases this isn't even needed and you should just use multi-stage builds. This old pattern was invented before multi-stage build support was added and is only needed for cases where you have a requirement to build multiple projects that depend on each other and can't be merged (and you can't access any registry).

Sure, and I built tons of images that way in the early days. And when I could move to multistage after it existed, I did.

But is it a fair assumption that every build (or even a majority of builds) is a simple linear "start at point A, go through some intermediates, get to point Z" flow, and thus a multi-stage candidate?

It is very common to have a base image (A), build some "golden base" (B), then some intermediates (C, D, E, F), then some finals (Q, R, S, T, U, V, W, X, Y, Z). This "tree" isn't a single build that can go in a multistage Dockerfile, and the intermediates aren't stored in a remote registry: they might be private, they might be part of a testing process and aren't valid to be stored until the finals (Q, R, S, T, U, V, W, X, Y, Z) are generated and fully tested (if at all). The local cache holds all of those interim images, and the later steps absolutely are not a single multistage build, simply cannot be. They cannot even be run at the same time (like bake with a bake.hcl).

  1. Build B from A.
  2. Build C, D, E, F from B
  3. Build Q, R, S, T, U, V, W, X, Y, Z from C, D, E, F

These are separate build processes, might be running at separate times. This isn't a single multi-stage Dockerfile (imagine what a nightmare that single file would be), and isn't run as a single step or stage in a CI process.
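For concreteness, the tree flow above looks roughly like this with the classic docker builder (a hedged sketch; image names and Dockerfile paths are placeholders):

# step 1: golden base built from the upstream base A
docker build -t golden:dev -f Dockerfile.B .

# step 2: intermediates, each FROM golden:dev in its own Dockerfile
docker build -t intermediate-c:dev -f Dockerfile.C .
docker build -t intermediate-d:dev -f Dockerfile.D .

# step 3, possibly hours later: finals, each FROM one of the intermediates
docker build -t final-q:dev -f Dockerfile.Q .

Each step resolves its FROM from the daemon's local image store, which is exactly what the docker-container driver does not provide.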

Let's be practical: someone has been doing this for years with docker build. It works great (it really does, hats off). They see buildkit, love what it can do, need those features that do not yet exist in the docker driver, so they try to go containerized.

And their entire build process breaks. Because step 1 stored output B, and step 2 cannot find B for C,D,E,F (repeat for step 3).

If you use named context you only have one command with no extra arguments that takes care of all of this.

That's just it. The above very common use case describes how "one command" isn't practical here. It has to be multiple commands. Yet docker build combined with FROM and local caches made those multiple commands dead simple to grasp and use. That was a key part of docker's adoption.

The part I am having a really hard time figuring out is, why the strong objection to proper caching inside buildkit container? You already store blobs, you already have the OCI layout, the number of additional steps to get to parity with default docker behaviour is so small (and people have offered to help build it). Why the resistance?

msg555 commented 2 years ago

I have the same sort of non-linear use case as @deitch describes and have found myself blocked by this, as an additional anecdote.

tonistiigi commented 2 years ago

The part I am having a really hard time figuring out is, why the strong objection to proper caching inside buildkit container? You already store blobs, you already have the OCI layout, the number of additional steps to get to parity with default docker behaviour is so small (and people have offered to help build it). Why the resistance?

BuildKit is not supposed to be a reimplementation of all the Docker features. Obviously, Docker has useful features, like an image store, and if you like it, you should continue to use it. BuildKit does not supersede the whole Docker engine but is scoped for a specific job: building from well-defined (immutable) sources to the expected build result.

If your problem statement is that you want to build some images, then take a break and later use the images you built in another build, then you need a place to keep them between your builds. There are plenty of options: docker image store, containerd image store, or any registry implementation. With named context we have made the registry case even simpler as you don't need to do any changes in Dockerfile (or use build args) if you want to switch between release images and local dev/staging registry image.
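A hedged example of that switch (image names and the registry host are placeholders): the Dockerfile keeps FROM myorg/base:1.0, and only the command line changes:

# release build: resolve the base from its canonical registry location
docker buildx build -t myorg/app:release .

# dev build: redirect the same name to a staging registry copy
docker buildx build --build-context myorg/base:1.0=docker-image://localhost:5000/myorg/base:dev -t myorg/app:dev .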

Defining an image store fundamentally conflicts with the builder design and BuildKit features. For example, BuildKit has an automatic garbage collector and smart pruning filters for managing storage. Users don't need to think about where their builds are running, and everything can be moved to different infrastructure. There is support for multi-node builds that split a build between multiple machines. All this would be impossible if there were some kind of name-based storage.

The user does not need to think about there being a special machine somewhere that contains their images, how they are named, how much storage they take, what happens if another version with the same name is built, what happens if one of them is deleted, or what to do after running image ls and getting 100 unnamed images as a result. If they want to think in these terms, then they should use a tool that is built for solving the image storage problem, not a builder.

deitch commented 2 years ago

This is a pretty good explanation @tonistiigi ; thanks for that. I will try to respond in detail.

BuildKit does not supersede the whole Docker engine but is scoped for a specific job: building from well-defined (immutable) sources to the expected build result.

Definitely. "Do one job and do it well". The challenge is that docker may have some features (local reusable image store), buildkit may others (cross-arch builds, remote contexts, etc.), and if you need to use both, you are stuck.

If your problem statement is that you want to build some images, then take a break and later use the images you built in another build, then you need a place to keep them between your builds

Agreed 100% there. You need somewhere to store them.

There are plenty of options: docker image store, containerd image store, or any registry implementation.

Well, not exactly. The docker image store doesn't support anything other than the local architecture (and it expands the images), and neither the docker store nor the containerd store is accessible to the container-driver buildkit.

I think you have narrowed the problem down well; I am going to modify it somewhat to make it more explicit: if you want to build images using buildkit because of features that docker does not have, and you want to "take a break" between builds, then you need somewhere to store those images that buildkit supports sourcing from. That list is pretty narrow. Actually, it is pretty much: a network registry. And if your image does not or cannot go to a registry (e.g. during the development lifecycle, when you are not yet ready to push, or an interim image that you never will push), what solution do you have?

With named context we have made the registry case even simpler as you don't need to do any changes in Dockerfile (or use build args) if you want to switch between release images and local dev/staging registry image.

Yes, I see that. Takes a while to get it, but basically named contexts are "image aliases". Come to think of it, this would have been nice in docker from day 1.

Defining an image store fundamentally conflicts with the builder design and BuildKit features. For example, BuildKit has an automatic garbage collector and smart pruning filters for managing storage. Users don't need to think about where their builds are running, and everything can be moved to different infrastructure. There is support for multi-node builds that split a build between multiple machines. All this would be impossible if there were some kind of name-based storage.

This is a pretty solid argument. It doesn't eliminate the need to solve the problem, but it makes a solid case for why a local cache may not be the approach.

You basically are saying:

  1. Docker did building and image caching and pulling and running and so on ...
  2. buildkit is focused on building, and doing that well, so let's not mix it up with image storage (even if it does have some blob caching).
  3. If we need to solve image store other than in networked registry, let's solve it.

I can get behind it, but what is that solution?

So what is the right answer? Reading what you wrote, you almost want to be able to have a pluggable implementation of image aliasing, so I can use the following Dockerfile:

FROM myimage:1.2.3

and then run docker buildx build --alias myimage:1.2.3=driver://image where driver could be one of several drivers: one that knows how to talk to a networked registry (default), a networked registry while changing the name, a local OCI file layout, docker, etc. Something that lets it keep builds inside buildkit and image storage outside buildkit, without being so tightly tied to the same name in the networked registry.

Can named context actually do that? I think maybe some of it?

tonistiigi commented 2 years ago

and then run docker buildx build --alias myimage:1.2.3=driver://image where driver could be one of several drivers:

You don't need a new flag. This is already supported: --build-context myimage:1.2.3=docker-image://image. The source can be a local directory, git repository, docker image, URL to tarball, or another build definition in bake for chaining. Adding another source that is a path to an OCI file layout does not conflict with the buildkit design.
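For illustration, the source types mentioned above look roughly like this on the command line (a hedged sketch; all names and URLs are placeholders):

# local directory
docker buildx build --build-context mylib=./vendor/mylib .

# git repository
docker buildx build --build-context mylib=https://github.com/example/mylib.git .

# another image, possibly under a different name
docker buildx build --build-context myimage:1.2.3=docker-image://docker.io/library/alpine:3.16 .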

deitch commented 2 years ago

You don't need a new flag. This is already supported.

Yeah, I wasn't suggesting a new flag. Just using language that was instantly understandable in our conceptual discussion.

The source can be a local directory, git repository, docker image, URL to tarball, or another build definition in bake for chaining

Yeah, I got that from your descriptions. Makes sense.

Is there anything like the current docker behaviour of, "look in this cache, and if not there, go pull it"?

Adding another source that is a path to an OCI file layout does not conflict with the buildkit design.

Meaning you would be open to it? I might be willing to submit a PR for that. Point me at the right place in buildkit where to hook it in.

The hard part, I think, is still the "generic idea". When I do a classic docker build, it knows how to parse each FROM line in the Dockerfile, look in the local cache, and go to the registry if that fails. I don't need to tell it, "for image X, look here; for image Y, look there; etc.". I am not sure we can get that behaviour out of this, which means that the build command needs to know each and every image in the Dockerfile and its source, as opposed to the generic docker style. The current docker cache is almost a "read-through cache", where it only goes remote on a cache miss.

I am up for getting us to have --build-context myimage:1.2.3=oci-layout:///path/to/oci (we would need to figure out if we should point to the root manifest/index or the generic base of the OCI layout), but I would really like to have a --build-context *=oci-layout:///path/to/oci, where it just becomes another read-through cache: try in the OCI layout, then go network.

--build-context as it stands buys us a lot of flexibility for individual images, but now we pass the responsibility for knowing each image to the command-line. This is more complicated than the clean separation of docker build, where the CLI caller does not need to know each image in the dockerfile.

Do you see where I am driving?

felipecrs commented 2 years ago

Sorry, but why not allow specifying a --build-context-like syntax inside the Dockerfile's FROM itself? Something like this:

FROM ../my-base-image

[...]
tonistiigi commented 2 years ago

Meaning you would be open to it?

Yes. Take a look at how the type=local cache import is implemented. It also loads an OCI layout from the client's disk, so the transfer parts should be reusable. First, it needs to be implemented in LLB, probably as an extra property on llb.Image. Then it can be plugged into the Dockerfile frontend and the --build-context flag.

--build-context *=

That does not work well. First of all, names are needed for the build-context mechanism exposed to frontends. Secondly, OCI has not even decided on a standard for naming multiple images within an OCI layout.

Sorry, but why not allow specifying a --build-context-like syntax inside the Dockerfile's FROM itself?

I'm not sure what the example you provided is supposed to do but Dockerfile is a secure and portable build mechanism. It does not have permission to randomly start reading files from user's/host's disk.

felipecrs commented 2 years ago

I'm not sure what the example you provided is supposed to do but Dockerfile is a secure and portable build mechanism. It does not have permission to randomly start reading files from user's/host's disk.

It's supposed to do the same as:

FROM my-base-image

[...]
docker build --build-context my-base-image=../my-base-image .

And regarding the Dockerfile being secure and portable: does this mean that buildx bake is the non-portable and non-secure counterpart meant to address the Dockerfile's limitations?

tonistiigi commented 2 years ago

@felipecrs Bake has a different security model indeed. At the moment, local bake files are allowed to point to any local path. For remote files, we have disabled parent access. In future releases, the plan is to move to https://github.com/docker/buildx/issues/179 to control what is allowed and what is not.

deitch commented 2 years ago

I'm not sure what the example you provided is supposed to do but Dockerfile is a secure and portable build mechanism. It does not have permission to randomly start reading files from user's/host's disk.

Yeah, I get what Tonis is saying here. The FROM line is always an image ref. The ability to turn that ref into something other than the actual registry lives outside the boundaries of the Dockerfile itself, i.e. in the build engine that interprets it; otherwise the Dockerfile wouldn't be portable and self-contained.

deitch commented 2 years ago

Secondly, OCI has not even decided on a standard for naming multiple images within the OCI layout.

Quite. I have generally used index.json (which is part of the OCI layout), but even there, there is disagreement on how to map the image name to the root blob; containerd uses its own database, etc.

That doesn't change the question of specific name vs generic. It removes it entirely from buildkit's scope, actually. It wouldn't know or care; it would just pass it on to the driver.

That does not work well. First of all names are needed for the build-context mechanism exposed to frontends.

That isn't a fundamental product reason as to why it wouldn't be desired; it is a technical engineering reason why it wouldn't work with the current design.

I will try and put it in other terms.

While --build-context keeps the resolution of images inside buildkit, just using different aliasing drivers, what I think I am getting at here is a blanket proxy.

I am aware that we could (almost) get to that by running a customized local registry as read-through cache, with perhaps some special logic that handles whatever aliasing is desired.

Of course, all of the above is much more complex than just, "I build something using buildkit, then build something else, and buildkit caches the results of the first".

deitch commented 2 years ago

While explaining this issue to some people, I realized the most basic use case for why it matters, why multi-stage Dockerfiles don't solve it, and why bake and build contexts only partially do.

Let's say that I am developing an image that normally is public, call it foo/myimage. Part of my build and test process is that there are downstream images, completely independent, that depend on foo/myimage, call them bar/image and bar/other.

A normal, sane, build process has me do the following:

  1. Do work on the source of foo/myimage
  2. docker build -t foo/myimage:testout .
  3. Do not push foo/myimage:testout, since I have absolutely no idea if that is the final version until I work through the downstream dependencies
  4. Modify the Dockerfiles for the source of bar/image and bar/other
  5. docker build -t bar/image:something and docker build -t bar/other:else
  6. If all works, docker push foo/myimage:testout

The above is a great, simple chaining process, very Docker-ish, and 100% depends on foo/myimage:testout being available in some local read-through cache.

Even with build contexts, how would I do my normal process? What can I give to docker build -t bar/image:something that will point it at the local output? The container-driver buildkit only understands a networked registry derived from the FROM image ref, and build contexts require that I explicitly give it the name (meaning yet another part of my build process I need to edit and manage, in addition to the Dockerfile), but they also have no "source" that I can point at the output of the first build.

In theory, I could do a docker save on my first image and then untar it, or maybe build the output with -o to a local file, but my process gets much more complex, and if my testing succeeds, I need to rebuild the image again in order to push it. Caching will make it more efficient, but it is yet another step to run.

Am I doing a decent enough job explaining how this breaks normal processes?

deitch commented 2 years ago

Oh and by the way, I totally get your argument that, "this is an optimized builder, not image manager/cacher; docker does both, this is just half of it." What I am trying to do is understand, "ok, what is the other half that complements it so we can get buildkit-build awesomeness with image caching"?

deitch commented 2 years ago

Thinking a bit more about (partial) solutions:

buildkit caches many blobs - but not image refs, for the reasons you described @tonistiigi - and only layers, not JSON blobs (config, manifest, index). What if it also cached those, which would be trivial? The same way that buildkit today, when it hits "I need blob X", looks in the local cache and, if it cannot find it, goes to the registry for the image currently being resolved, it could do the same for manifests and the like.

Then the only thing left to do would be to resolve image -> hash.

That would allow something like:

build --build-context myimage:1.2.3=docker-image://myimage:1.2.3@sha256:abcdef11222

or even shorthand

build --build-context myimage:1.2.3=docker-image://@sha256:abcdef11222

Buildkit could:

  1. Hit myimage:1.2.3
  2. See that it has a context
  3. See that the context has a sha256 ref
  4. Try to use that (possibly cached) blob as the root manifest/index
  5. If it finds it, follow from there; if not, go to remote registry... just like every other blob it does today

It isn't perfect, but it gets rid of a lot of the headache without breaking the current buildkit model/mindset:

It still means the command-line needs to know a lot, but it is a big step forward while staying in the same context.

Thoughts @tonistiigi ?

felipecrs commented 2 years ago

It still means the command-line needs to know a lot

I'm not sure if I agree with this part. Because bake would be as simple as:

docker buildx bake image other

To build the two images dependent on myimage, by also building myimage in the process.

deitch commented 2 years ago

To build the two images dependent on myimage, by also building myimage in the process

Are you suggesting that the dev process becomes one that uses bake? So:

Still not quite the simplicity of "just have a local read-through cache", but something.

I think the part that throws me is that, I have to use a completely different command-line for the second image, depending on if my first image is locally built or not.

felipecrs commented 2 years ago

I think the part that throws me is that, I have to use a completely different command-line for the second image, depending on if my first image is locally built or not.

I would just rather use bake for both cases, including "building from an image that is publicly known".

I have to use a completely different command-line for the second image, depending on if my first image is locally built or not.

Well, that will be defined in your Dockerfile, or at most in docker-bake.hcl (in case you want to chain two Dockerfiles). So the command-line call will always be as simple as:

docker buildx bake my-second-image
felipecrs commented 2 years ago

And to support both cases in the same docker-bake.hcl, the command-line call could vary:

# docker-bake.hcl

target "base" {
    dockerfile = "baseapp.Dockerfile"
}

target "app" {
    contexts = {
        baseapp = "target:base"
    }
}

Building app, also building base

docker buildx bake app

Building app, using previously pushed base

docker buildx bake app --set=app.contexts.baseapp=docker-image://my-previously-pushed-base
deitch commented 2 years ago

@felipecrs I am starting to see where you are going.

I am curious how bake even knows about the first image when it doesn't store things? Ah, I am guessing that it caches the root manifest of the first (but not the image name)? That almost inches towards what I was suggesting above: that buildkit always cache JSON (config, manifests, indexes) in addition to binary blobs, and that we be able to do --build-context baseapp=@sha256:abcdef123456.

In any case, how do I get from where you are going to an actual flow?

It kind of reverses what you did. Is there a way to use the CLI to set the context with --set to the output of a previous build step, or is that only in the bake file?

For that matter, I could even do something like:

It still grates on me a bit that I cannot give my users the seamless experience they were used to. Well, I do wrap the build, so maybe there is room there.

deitch commented 2 years ago

One other question: does that caching work with output formats? When I build, I need the image in OCI format, so I run build -o type=oci

deitch commented 2 years ago

Oh wait, I am not so sure it will work.

How would bake handle it if they are in 2 completely different directories?

|- A
    |- Dockerfile
    |- srcs/
|- B
    |- Dockerfile
    |- srcs/
felipecrs commented 2 years ago

How would bake handle it if they are in 2 completely different directories?

There are no restrictions on where the context for bake targets can be.

deitch commented 2 years ago

So I would need to craft some custom process that knows, "if I am building an image, just build it; if I am building an image and then downstream dependencies, build it and its downstream dependencies with bake, then if all is good, go back to the usual process." Kind of cumbersome, but I see how it can work.

I guess with the blob caching, I could actually do:

  1. docker buildx build A - this will build A and save layers
  2. docker buildx bake B (where the bakefile in B references the A) - this might rebuild A, but since buildkit already caches the layers, it should be pretty quick.
  3. when ready push out A
  4. when ready rebuild B based on the public A, and push it out

Still very cumbersome. I think I can make it work. But you can see how a "buildkit that supports some read-through cache" would be dramatically simpler, and even a buildkit that caches all layers, just not the image->manifest mapping, would be much easier, as I could just do buildkit build --build-context A=@sha256:abcdef12345.
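A hedged sketch of that flow, with placeholder names and bake targets:

# 1. build A; its layers stay in the builder's cache
docker buildx build -t example/a:dev ./A

# 2. build B via bake, where B's bake file points a named context at the A target
docker buildx bake -f B/docker-bake.hcl b

# 3. when ready, push A
docker buildx build --push -t example/a:1.0 ./A

# 4. rebuild B against the now-public A and push it
docker buildx build --push -t example/b:1.0 ./B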

deitch commented 2 years ago

Yes. Take a look at how the type=local cache import is implemented

@tonistiigi, started it. Wrapping my head around the various imports and stuff is not easy, but getting there (slowly).

Separately, would we be open to the other part discussed? buildkit only caches layers right now, not manifests, config or indexes (which are sha256 blobs anyways). I had suggested caching those too, subject to the same garbage collection as the layers, and then we could have an option for:

--build-context A:1.2.3=docker-image://A:1.2.3@sha256:abcdef12345

(or similar), where we provide the mapping?

amrmahdi commented 2 years ago

@deitch Have you considered using buildkit with the containerd worker? With the containerd worker you can reference a local image in FROM as you would previously with docker.
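A rough sketch of what that setup might look like (hedged; the flags shown are the standard buildkitd worker switches, but check the docs for your version):

# run buildkitd with the containerd worker instead of the default OCI worker
buildkitd --oci-worker=false --containerd-worker=true

# build with buildctl; per the suggestion above, FROM can then reference
# images that already exist in the containerd image store
buildctl build --frontend dockerfile.v0 --local context=. --local dockerfile=.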

Bidski commented 1 year ago

I have come across this issue with VS Code devcontainers.

I have an image that I have built using docker buildx build --output=type=docker -t myuser/myimage:test ..... and I have created a .devcontainer.json

{
    "image": "myuser/myimage:test",
    ....
}

When spinning up the devcontainer VS Code creates a Dockerfile that looks like this

ARG _DEV_CONTAINERS_BASE_IMAGE=placeholder

FROM $_DEV_CONTAINERS_BASE_IMAGE AS dev_containers_target_stage
#LABEL devcontainer.metadata="{.....}"

And attempts to build it as

docker buildx build --load --build-arg _DEV_CONTAINERS_BASE_IMAGE=myuser/myimage:test --target dev_containers_target_stage -t vsc-blah-a29dfd4939d9803ec8dc84acb67f0541-features -f /path/to/Dockerfile /path/to/empty-folder

which results in ERROR: failed to solve: myuser/myimage:test: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed, even though docker image ls lists myuser/myimage:test.

If I instead run docker build --build-arg _DEV_CONTAINERS_BASE_IMAGE=myuser/myimage:test --target dev_containers_target_stage -t vsc-blah-a29dfd4939d9803ec8dc84acb67f0541-features -f /path/to/Dockerfile /path/to/empty-folder, the image is built successfully. However, I have to run this command manually (I know of no options to make VS Code use docker build over docker buildx).

This is obviously a case where I can't use a multi-stage Dockerfile (I don't even own the VS Code Dockerfile; from what I can tell it is automatically generated by VS Code) and the image I have built will never be pushed to any registry, public or private (it is only intended to be used/accessed on my local machine).

What options are available to me here?

thaJeztah commented 1 year ago

What options are available to me here?

If you switch to the default builder (docker buildx use default), both docker buildx build and docker build should use the same storage, in which case you can refer to the image.
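A minimal sketch of that switch, reusing the build command from the comment above (the output tag here is a placeholder):

docker buildx use default
docker buildx build --build-arg _DEV_CONTAINERS_BASE_IMAGE=myuser/myimage:test --target dev_containers_target_stage -t vsc-devcontainer-test -f /path/to/Dockerfile /path/to/empty-folder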

Bidski commented 1 year ago

@thaJeztah thank you! that worked perfectly!

ryanpennellblacksage commented 1 year ago

https://github.com/docker/buildx/blob/v0.8.0/docs/reference/buildx_build.md#build-context https://github.com/docker/buildx/blob/v0.8.0/docs/reference/buildx_bake.md#defining-additional-build-contexts-and-linking-targets

I'm sorry, I'm still very confused about the solution. I've been working on this for several days and have reviewed all the issues I've found.

I'm attempting to do a containerized build using docker with the docker:dind service running. I have /output/myimage.tar (which is the output of docker save) and need to run docker build.

Dockerfile:

FROM myimage:latest
...

Is the solution to run the following?

docker build --build-context myimage:latest=file:///output.tar

I've tried many variations to no avail. Any help would be appreciated.

Edit: The best I got working was, during the base build, using the --output flag to produce OCI data and then loading that OCI data as an additional build context. Is there a cleaner alternative that directly uses the tar file and doesn't disable buildkit?
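A hedged sketch of that OCI-layout round trip (paths and names are placeholders, and the exact oci-layout:// syntax may vary by buildx version):

# export the base image as an OCI layout instead of docker save
docker buildx build -o type=oci,dest=base-oci.tar -t myimage:latest ./base
mkdir base-oci && tar -xf base-oci.tar -C base-oci

# point the FROM ref at the layout when building the downstream image
# (depending on the buildx version, the path may need a :tag or @sha256 digest suffix)
docker buildx build --build-context myimage:latest=oci-layout://./base-oci .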

harshu1470 commented 10 months ago

I think this is an issue with Docker itself, because it uses a registry to pull images and locally stored images are not in any registry. So a possible solution is to push your images to your repository and then log in to your registry locally.

kong62 commented 10 months ago

Where are the intermediate layers for debugging? Where is the FROM base image?

# docker build -t test:v1.0 .
[+] Building 1.2s (9/9) FINISHED                                                                                                                                                    
 => [internal] load .dockerignore                                                                                                                                              0.0s
 => => transferring context: 2B                                                                                                                                                0.0s
 => [internal] load build definition from Dockerfile                                                                                                                           0.0s
 => => transferring dockerfile: 159B                                                                                                                                           0.0s
 => [internal] load metadata for harbor.sit.hupu.io/base/centos:v7.9                                                                                                           0.2s
 => [auth] base/centos:pull token for harbor.sit.hupu.io                                                                                                                       0.0s
 => [1/4] FROM harbor.sit.hupu.io/base/centos:v7.9@sha256:3c61367dee41a708b6bbbf1db2001eab9d073060b0f1bd295d9421b6217f1578                                                     0.0s
 => [internal] load build context                                                                                                                                              0.0s
 => => transferring context: 30B                                                                                                                                               0.0s
 => CACHED [2/4] RUN yum install make -y                                                                                                                                       0.0s
 => CACHED [3/4] COPY hello.txt /opt/hello.txt                                                                                                                                 0.0s
 => ERROR [4/4] RUN yum install xxxx -y                                                                                                                                        1.0s
------                                                                                                                                                                              
 > [4/4] RUN yum install xxxx -y:                                                                                                                                                   
#0 0.542 Loaded plugins: fastestmirror, ovl                                                                                                                                         
#0 0.570 Loading mirror speeds from cached hostfile                                                                                                                                 
#0 0.678 No package xxxx available.
#0 0.701 Error: Nothing to do
------
Dockerfile:7
--------------------
   5 |     COPY hello.txt /opt/hello.txt
   6 |     
   7 | >>> RUN yum install xxxx -y
   8 |     
--------------------
ERROR: failed to solve: process "/bin/sh -c yum install xxxx -y" did not complete successfully: exit code: 1
# docker images -a
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
samrocketman commented 8 months ago

Add me to the pile of casualties affected by this issue.

I support multi-arch devcontainers, and after upgrading docker-compose (in VS Code devcontainers) I can no longer reference locally built images to build FROM; instead it tries to fetch the local images remotely, failing on

https://registry-1.docker.io/v2/library/my-custom-local-only-dockerd-image/manifests/latest

This feels like a regression and not a feature that was introduced. Pretty frustrating as I'm blocked and am looking through these comments for a workaround.

thaJeztah commented 8 months ago

If you are on a recent version of Docker Desktop or the Docker Engine, you can enable the containerd image store integration. That provides a multi-arch store without requiring a docker-container builder. It's still in preview, but work is progressing and will be moved out of "experimental" in the not too distant future; see https://docs.docker.com/storage/containerd/

Make sure to switch the builder instance back to the default (not the container-builder, otherwise it continues to use the docker-container driver)
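On Docker Engine this is a daemon setting; a minimal sketch, assuming /etc/docker/daemon.json and a daemon restart afterwards:

{
  "features": {
    "containerd-snapshotter": true
  }
}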

sgleske-ias commented 8 months ago

@thaJeztah I think I found the root cause of my specific issue. VSCode Dev Containers extension hard codes BUILDKIT_INLINE_CACHE=1.

I have commented on their code review; I seem to have been confused (it's a complex stack)

ref https://github.com/devcontainers/cli/pull/38#discussion_r1435270605

samrocketman commented 8 months ago

@thaJeztah actually, I think I ruled out dev containers after all. I'll describe more of my setup.

My compose file looks like:

version: '2.4'
services:
  some-service:
    build:
      context: .
      dockerfile: ../../shared/Dockerfile.compose
      args:
        BASE_DOCKER_IMAGE: ${MY_DOCKER_IMAGE}
        PLATFORM: ${PLATFORM}
#... etc

I have a .env file with the contents

MY_DOCKER_IMAGE=my-custom-image-arm64
PLATFORM=arm64

And the ../../shared/Dockerfile.compose looks like:

ARG BASE_DOCKER_IMAGE
ARG PLATFORM
FROM --platform=linux/${PLATFORM} ${BASE_DOCKER_IMAGE}

When I run:

docker-compose --project-name my-project -f /path-to/docker-compose.yml build

I get the error

failed to solve: my-custom-image-arm64: failed to do request: Head "https://registry-1.docker.io/v2/library/my-custom-image-arm64/manifests/latest": tls: failed to verify certificate: x509: certificate signed by unknown authority

Locally, if I run docker images | grep my-custom-image-arm64, the image my-custom-image-arm64:latest exists. It's been about a year since I last tried this project and it used to work. (Side note: I do have a corporate MITM proxy, and the company does pay Docker for a license for my desktop client.)

I can do things like build with docker compose using non-local images and images hosted in a corporate docker image store. However, I can't seem to run docker-compose build if the Dockerfile being built references a local image.

samrocketman commented 8 months ago

BuildKit image building used to work with locally built docker images and apparently no longer does so due to the cache. How do I get support for multi-arch images and reference local images when building from docker-compose? This definitely used to work and currently does not.

samrocketman commented 8 months ago

I found my own workaround: I ran the following CLI command:

docker context use default

And now it works again. Thanks to https://stackoverflow.com/questions/20481225/how-can-i-use-a-local-image-as-the-base-image-with-a-dockerfile

samrocketman commented 8 months ago

The root cause of my issue is likely self-induced.

I was researching multi-arch images at some point and I'm pretty sure that I switched my buildx context. By doing that, I probably ended up using the docker-container driver while blindly following instructions. Switching the context back to default resolved it, and the conversation here gave me some hints, along with the Stack Overflow post.

Nothing to see here...

BearTS commented 4 months ago

I found my own workaround: I ran the following CLI command:

docker context use default

And now it works again. Thanks to https://stackoverflow.com/questions/20481225/how-can-i-use-a-local-image-as-the-base-image-with-a-dockerfile

that workaround does work for me too

superstator commented 1 month ago

Running into this as well, and while using --build-context sounds plausible it doesn't actually seem to work as advertised. I created a very minimal repro here:

https://github.com/superstator/buildx-issue

tonistiigi commented 1 month ago

@superstator --build-context foo=docker-image://foo in your example is meaningless. Without the context, foo would also point to the image with the same name. If the image does not exist in a registry you need to pull the definition from some other source when initializing the build context, e.g. an oci-layout or another buildx bake target.
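As a hedged closing illustration (the layout path is a placeholder, and the exact oci-layout:// syntax may vary by buildx version), the oci-layout variant looks roughly like:

docker buildx build --build-context foo=oci-layout://./foo-oci .

The bake-target variant looks like the docker-bake.hcl example earlier in this thread, where the app target sets its baseapp context to target:base.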