moby / buildkit

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit
https://github.com/moby/moby/issues/34227
Apache License 2.0

Unable to use Buildkit with Windows containers #616

Open tofflos opened 6 years ago

tofflos commented 6 years ago

I'm using the Buildkit version that comes bundled with Docker for Windows 18.06.1 and am experiencing some trouble running it with Windows containers. In the log below you can see a build succeed for a very simple project without Buildkit and then fail once I enable it. The localized error message "Det går inte att hitta filen" roughly translates to "Unable to find the file". I've had success running Buildkit on the same system with Linux containers. A minimal project that reproduces the error can be found here: test.zip.

PS C:\test> docker version
Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:21:34 2018
 OS/Arch:           windows/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.24)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:36:40 2018
  OS/Arch:          windows/amd64
  Experimental:     true
PS C:\test> ls

    Directory: C:\test

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----       2018-09-11     15:38             74 Dockerfile
-a----       2018-09-11     15:39             23 test.txt

PS C:\test> type .\Dockerfile
FROM microsoft/nanoserver:1803
COPY test.txt /test.txt
RUN type test.txt

PS C:\test> $Env:DOCKER_BUILDKIT=0
PS C:\test> docker build -t test .
Sending build context to Docker daemon  3.072kB
Step 1/3 : FROM microsoft/nanoserver:1803
 ---> 693ff1719e39
Step 2/3 : COPY test.txt /test.txt
 ---> 3cb8bc9e5e2e
Step 3/3 : RUN type test.txt
 ---> Running in 376f873629fd
This is a test message!Removing intermediate container 376f873629fd
 ---> 0cce47564a2d
Successfully built 0cce47564a2d
Successfully tagged test:latest

PS C:\test> $Env:DOCKER_BUILDKIT=1
PS C:\test> docker build -t test .
[+] Building 0.2s (2/2) FINISHED
 => local://dockerfile (Dockerfile)                                                                                                                                                                                                                                       0.1s
 => => transferring dockerfile: 31B                                                                                                                                                                                                                                       0.0s
 => local://context (.dockerignore)                                                                                                                                                                                                                                       0.1s
 => => transferring context: 2B                                                                                                                                                                                                                                           0.0s
failed to read dockerfile: open C:\ProgramData\Docker\tmp\buildkit-mount977689469\Dockerfile: Det går inte att hitta filen.
TBBle commented 3 years ago

Yeah, the Windows Containers filesystem performance particularly when importing/exporting layers isn't wonderful, due to a number of hoops it has to jump through between Windows and OCI formats. That's nothing to do with BuildKit though, that's generally Docker Engine or Containerd's responsibility, and both are mostly limited by the underlying systems.

One often-overlooked trick is if you have unpigz.exe in the daemon's path, e.g., from https://blog.kowalczyk.info/software/pigz-for-windows.html, decompression of large layers will be a lot faster.
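
A minimal sketch of that setup, assuming an arbitrary install directory (the pigz-for-Windows build linked above ships unpigz.exe):

```powershell
# Illustrative only: put unpigz.exe somewhere on the machine-wide PATH so the
# daemon can find it; the install directory is an arbitrary choice.
New-Item -ItemType Directory -Force 'C:\tools\pigz' | Out-Null
# ...copy unpigz.exe from the pigz-for-Windows download into C:\tools\pigz...
$machinePath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
[Environment]::SetEnvironmentVariable('Path', "$machinePath;C:\tools\pigz", 'Machine')
Restart-Service docker   # the daemon only reads PATH at startup
```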

TBBle commented 3 years ago

Just came across a Linux-containers-specific codepath we'd hit with WCOW in the containerd executor, for handling non-default USERs.

In summary, executor/containerd/executor.go does a bunch of uid/gid handling, which culminates in oci.WithUIDGID or a specs.User with UID and GID populated, and that's Linux-container specific. Windows containers need to use oci.WithUsername or populate Username in specs.User respectively.

It looks like we might be able to use containerd's oci.WithUser and oci.WithAdditionalGIDs to replace the existing code used by the containerd executor, as they work cross-platform. But I haven't checked whether deferring mounting the container root until later (i.e. once we know if it's Linux or Windows) will cause problems, or whether there's a difference in the mounting that will affect the Linux flow. I also think we'd lose some shortcut optimisations.

Either way, we need to change the functions called by the containerd executor from Username → uid/gid/sgids and then uid/gid/sgids → SpecOpts, to Username → SpecOpts, i.e. the same shape as containerd's oci.WithUser and oci.WithAdditionalGIDs even if those methods themselves aren't used, with the current behaviours moved into a SpecOpts implementation. This is because uid/gid/sgids is target-platform specific, while the containerd executor is target-platform agnostic.

See https://github.com/opencontainers/runc/pull/2695#discussion_r533528172 for the details of the oci-side behaviour.

Edit: Turns out this affects all runners, not just non-default USER. It must be newer than my last set of tests; I had to disable this in my hacks_ahoy branch to get back to the working-state level of the last time I tested WCOW support.

TBBle commented 3 years ago

Since my last-working state, #1560 set a default path for WCOW containers to

```
C:\Windows\system32;C:\Windows
```

which is correct for nanoserver but incorrect for servercore, which has default path

```
C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Users\ContainerAdministrator\AppData\Local\Microsoft\WindowsApps
```

The upshot of this is that powershell in Dockerfile stops working on servercore, which is clearly a regression.

I don't really understand the rationale for setting the default PATH here at all. Surely the default PATH is up to the image, not the runner; injecting it just produces an invisible dependency on the runner, i.e. here we have a clear difference between docker build and buildctl build behaviour.

I think the default path should be copied from the FROM image, not hard-coded to some value. Surely the current behaviour would also mess with Linux containers whose parent image set a non-default PATH env in its config?

Checking the default path in the images:

```console
> ctr --namespace buildkit run --rm mcr.microsoft.com/windows/nanoserver:20H2 tm1 cmd /c echo %PATH%
C:\Windows\system32;C:\Windows;
> ctr --namespace buildkit run --rm mcr.microsoft.com/windows/servercore:20H2 tm1 cmd /c echo %PATH%
C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Users\ContainerAdministrator\AppData\Local\Microsoft\WindowsApps
```

I'm not sure if the default path depends on the user, since `ctr` does not provide a command-line argument to change the user being run as.
Nanoserver does not set a default path in its config; I assume something in the Windows container startup sets up a `PATH` if it's not already set.

```console
> docker inspect mcr.microsoft.com/windows/nanoserver:20H2
[
    {
        "Id": "sha256:ce974158e62eafec1bcb376762d115ffb840664814756295e9e7d4f141b91309",
        "RepoTags": [
            "mcr.microsoft.com/windows/nanoserver:20H2"
        ],
        "RepoDigests": [
            "mcr.microsoft.com/windows/nanoserver@sha256:6bf7921dfed9214b5bae102fc82e5ef9a311116766581c1890cb9da16bd18ec9"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2020-12-03T07:14:09.1214501Z",
        "Container": "",
        "ContainerConfig": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": null,
            "Cmd": null,
            "Image": "",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "DockerVersion": "",
        "Author": "",
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "ContainerUser",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": null,
            "Cmd": [
                "c:\\windows\\system32\\cmd.exe"
            ],
            "Image": "",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "Architecture": "amd64",
        "Os": "windows",
        "OsVersion": "10.0.19042.685",
        "Size": 263144620,
        "VirtualSize": 263144620,
        "GraphDriver": {
            "Data": {
                "dir": "C:\\ProgramData\\Docker\\windowsfilter\\66cfc149bd5a24aa7fd6472ea9e0dbb7f738d4a5b0f6d5039e9ae342a64e90d1"
            },
            "Name": "windowsfilter"
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:eba88ad2c1a09b2ffe97926373ea69286d7a075d440f6ce82a08429aa5d3e89d"
            ]
        },
        "Metadata": {
            "LastTagTime": "0001-01-01T00:00:00Z"
        }
    }
]
```
lippertmarkus commented 3 years ago

@TBBle again, thanks a lot for looking into this!

TBBle commented 2 years ago

So I can find it later (when I get containerd into the state I want it to be in, and come back to this): I just discovered that containerd includes a (bash!) script to set up CNI on Windows, including creating a default nat network, at https://github.com/containerd/containerd/blob/main/script/setup/install-cni-windows.

There's a PowerShell adaptation embedded in https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/scripts/Install-Containerd.ps1, but it downloads containerd and the CNI plugins into weird hard-coded paths, as it's intended for machines that will be nothing but k8s worker nodes.

But still, either script will be useful to see how the nat CNI Plugin is supposed to be used, compared to my earlier guessing efforts based only on the source.

lippertmarkus commented 2 years ago

@TBBle If you still need help setting up a Windows node with containerd and nat networking, I'm happy to help.

TBBle commented 2 years ago

Thank you. I have that part working, or had last time I tried it.

My current stall is because I'm waiting for https://github.com/Microsoft/hcsshim/pull/901 to land so I can (hopefully) land https://github.com/containerd/containerd/pull/4419 once I work out where and how to avoid the Server 2019 breakage in the snapshotter tests, so that I can then have a working containerd that's usable for CI of BuildKit, so that I can work on Buildkit without having to worry about changes and fixes being reverted or subverted because there's no CI for Windows with containerd.

The actual time I spend on this (and it's infrequent recently) has been trying to reproduce the breakage in https://github.com/containerd/containerd/pull/4419 directly in https://github.com/Microsoft/hcsshim (since I'm 90% sure it's an OS-level thing, not the containerd code), and/or find a working workaround. I actually haven't yet (but need to) investigate if Windows Server 2022 shows the same issue, because if not, then at least it's not a going-forward blocker.

nikelborm commented 1 year ago

Hi, guys! Are you all talking about Docker installed in WSL? Or about some other way of installing Docker on Windows?

TBBle commented 1 year ago

This is about Windows Containers, not Linux containers under WSL. So either Docker Desktop in Windows Containers mode, some other Docker installation on Windows, or BuildKit + Containerd without Docker.

The approach I was taking, and that is AFAIK still the chosen approach, is to get it working for the latter case (BuildKit + Containerd) first, and then later it can be delivered into Docker via buildx's BuildKit integration, which will hopefully line up nicely with both Docker's containerd-on-Windows support and Docker's containerd content-store support to avoid a bunch of duplicated/forked implementations and "just work".

rafagsiqueira commented 1 year ago

@TBBle it would also be nice to be able to build with buildx + the kubernetes driver on Kubernetes Windows nodes. Would that work too?

TBBle commented 1 year ago

That depends on whether any limitations appear when running buildkitd in a Windows Container.

If it can be run in a container then the docker-container and kubernetes drivers will both be feasible. If it turns out it needs to be run in a Host Process container (i.e. we can't use "rootless"-mode), then either we'd need a new buildx driver, or the existing container drivers would need to have Host Process container support added. (I'm not sure if Docker supports host-process containers at all, since AFAIK they were mostly driven by a Kubernetes use-case. It would depend on containerd-on-Windows support anyway, since Host Process containers aren't present in hcs v1.)

All my earlier experimentation was with an uncontained buildkitd (running as a daemon on the host OS) using buildctl (equivalent to the Remote driver) and I expect that the Docker driver would work the same way, but honestly haven't looked closely at buildx, and particularly I haven't looked at how the docker-container and kubernetes drivers manage their owned buildkitd instances and associated containerd.
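
For concreteness, the uncontained setup looked roughly like this; the named-pipe address is the usual buildkitd default on Windows, but treat the exact invocation as a sketch rather than a reference:

```powershell
# Rough sketch: buildkitd as a host daemon, buildctl as the client over the
# conventional named pipe (much like buildx's remote driver would do).
Start-Process buildkitd.exe
buildctl.exe --addr npipe:////./pipe/buildkitd `
    build --frontend dockerfile.v0 --local context=. --local dockerfile=.
```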

rafagsiqueira commented 1 year ago

I will do some testing when I can and try to figure out the details.

gaui commented 1 year ago

This mentions Windows containers, but is it possible to run buildkitd on a Windows VM?

TBBle commented 1 year ago

Not usefully, no. Like, it'll run (or it used to, I haven't tried in a while) but can't actually do most of its job. Apart from the brief discussion in February, this is about building Windows Containers using buildkit at all, and getting it working on bare metal/VMs is the first target, and would be enough to close this issue IMHO.

The February discussion about containerised buildkitd would be a further feature (if it doesn't happen to work out-of-the-box).

clarenceb commented 1 year ago

This is being worked on here: https://github.com/microsoft/Windows-Containers/issues/34 and it seems there is some potential solution which is awaiting review (https://github.com/moby/buildkit/pulls/gabriel-samfira).

In the meantime, in that same issue, it's mentioned that you can build Windows images already as long as you don't include RUN statements, see: https://github.com/microsoft/Windows-Containers/issues/34#issuecomment-653215478

Though not using Dockerfiles or docker buildx, I did a quick PoC a while ago with Crane. It lets you assemble a Windows container image on Linux, but you need to have the EXEs/DLLs prebuilt on a Windows machine (similar to the approach above), see: https://gist.github.com/clarenceb/269c8bc69ea47b0022a34605844b531b

TBBle commented 10 months ago

So with #3518 landed 🎉, @gabriel-samfira's list of existing patches-to-land appears complete, and with one minor fix (#4364) I was able to build a reasonably trivial Dockerfile using the BuildKit master branch and containerd 1.7.7, and then execute it with nerdctl 1.6.7 (which also turns out to need some fixes, but they're only cosmetic). (Edit: I just remembered that I had also updated the Windows CNI plugins to support CNI 1.0.0 for nerdctl, which was probably also necessary for BuildKit; I got nerdctl working before I started with buildkit.)

A super simple Dockerfile:

```Dockerfile
FROM mcr.microsoft.com/powershell:lts-nanoserver-ltsc2022
LABEL Description="Built with BuildKit!"
WORKDIR C:/
SHELL ["C:\\Program Files\\PowerShell\\pwsh.exe", "-command"]
ENTRYPOINT ["C:\\Program Files\\PowerShell\\pwsh.exe"]
RUN dir C:/
RUN echo 'Write-Host -ForegroundColor DarkGreen Hello World' > C:/wr.ps1
RUN echo 'Write-Host -ForegroundColor DarkBlue Hello World' > C:/wrblue.ps1
CMD ["-command", "C:/wr.ps1"]
```

```console
buildctl build --frontend dockerfile.v0 --local context=. --local dockerfile=. --output type=image,name=docker.io/tbble/supersimpleDocker
nerdctl run -it --rm docker.io/tbble/supersimpleDocker
nerdctl run -it --rm docker.io/tbble/supersimpleDocker -command C:/wrblue.ps1
```

There are still a few rough spots that need fixing.

While I was at it, I got nerdctl build trivially working. Mostly just removing the "not supported on Windows" check: https://github.com/containerd/nerdctl/pull/2587


I was wondering if it was time to start running up the integration test suite on Windows. However, I noticed that the existing integration test suite relies on running inside a container, and that's probably a non-starter for Windows unless we want to blow through into Host Process mode. (Which is actually probably the right long-term approach, but I don't know if that works with Docker, or on the GHA hosted Windows runners.)

Potentially, running buildkitd and containerd outside a container, and buildctl, nerdctl, and the rest of the test suite-used utilities inside a container would be doable. I have a vague recollection that mapping named pipes into a Windows container should work, and I don't think buildctl relies on mounting container images locally or anything else that would be hard inside a regular WCOW container.
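
A speculative sketch of that, assuming Docker's named-pipe bind-mount syntax and a hypothetical buildkit-test-image that carries buildctl.exe and the test utilities:

```powershell
# Speculative: map the host's buildkitd pipe into a plain WCOW container and
# drive it from inside; 'buildkit-test-image' is a hypothetical image name.
docker run --rm -it `
    -v \\.\pipe\buildkitd:\\.\pipe\buildkitd `
    buildkit-test-image `
    buildctl.exe --addr npipe:////./pipe/buildkitd debug workers
```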

We also have a bit of a circular dependency for running up the integration-test image anyway, since it relies on BuildKit (and the build process relies on buildx and bake as well) so I believe we're going to have to (temporarily, at least) have a separate parallel GHA test setup and a separate Dockerfile.windows to build the equivalent of the integration-test container image.

gabriel-samfira commented 10 months ago

However, I noticed that the existing integration test suite relies on running inside a container, and that's probably a non-starter for Windows unless we want to blow through into Host Process mode.

While the clean way would be to use a container for the tests, I believe the only big reason a container is used is that it makes the test setup process easier, in the sense that the container has all the binaries and settings required for the test suite to run. Otherwise go test should run the integration tests as well, if the SKIP_INTEGRATION_TESTS env variable is not set.

We could set those up in a pre-test-setup workflow step. That should take care of the CI, but it will be a pain for whoever tries to run the tests locally. A disposable VM for tests is advised until we have all the tooling in place to run things in a container.

Now that the master branch can build images (with the addition of your PR), we unblocked the rest of the ecosystem, and fixes can be done in parallel in nerdctl, buildx, etc.

gabriel-samfira commented 10 months ago

Potentially, running buildkitd and containerd outside a container, and buildctl, nerdctl, and the rest of the test suite-used utilities inside a container would be doable.

That might not work due to how buildkit uses the local mounter. Unless we also mount c:/ProgramData/containerd folder and the $env:TMP folder in the container, but at that point we might as well go with the host process container option.
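
For illustration, the bind-mount variant might look something like this; buildkit-test-image and the exact paths are hypothetical:

```powershell
# Sketch of the bind-mount variant (paths and image name are hypothetical);
# at this point a host process container is probably the simpler option.
docker run --rm -it `
    -v C:\ProgramData\containerd:C:\ProgramData\containerd `
    -v "$($env:TMP):$($env:TMP)" `
    buildkit-test-image
```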

We have the flexibility to mangle the github runner whichever way we need before running the test. We can even set up a pristine Windows VM on azure and set it up, like we do in containerd.

TBBle commented 10 months ago

Ah, is the local mounting done by buildctl, not buildkitd? I knew I should have checked that first. Ah well. Yeah, we'll probably have to start with it completely non-containerised in a distinct pipeline with an eye to migrating to HostProcess someday, rather than trying to split the difference.

Even HostProcess containers may not be able to sufficiently-unify the pipelines to make it worth doing. I actually don't recall having heard of anyone running a second containerd in a HostProcess container.

And yeah, the next step is pushing forward into buildx, which will be the point where this can start getting into the hands of a wider group of users. Once we have Docker Desktop for Windows backed by containerd and containerd image store (both exist in the moby repo with CI support-ish but are not yet shipped in Docker Desktop), AFAIK users can just update buildx in-situ to pull improvements and fixes, which is much more reliable than getting the full stack working by hand.

gabriel-samfira commented 10 months ago

Ah, is the local mounting done by buildctl, not buildkitd?

I think it is done by buildkitd, but the issue is a mixed bag. I am not (yet) confident that the tests won't check for the existence of files in the path where layers get mounted (see continuity/fstest). Starting next week we will finally have time to start tackling the test suite.

TBBle commented 10 months ago

Ah, good point. The tests would be doing the mounting themselves, so they don't need to see the exact same filesystem as buildkitd, but they do need access to the same containerd backing store. I see now. That'd probably also be true on Linux in the same situation, i.e. if we were trying to run tests in a non-privileged container talking to buildkitd/containerd in a separate container.

tonistiigi commented 10 months ago

I am not (yet) confident that the tests won't check for the existence of files in the path where layers get mounted

They do not, AFAIK; the data is exported with --output to registry/local/containerd and then checked.


Could someone explain in more detail what the actual technical problems are with running buildkitd inside a container? This isn't just for tests; I also want it to be possible for buildx create to run any upstream release of buildkit as an isolated instance. On Linux, by default this means making a buildkit container. I'm also not sure at the moment whether frontend containers work in WCOW or not. That one is a slightly different problem though, as frontends do not require any extra privileges.

gabriel-samfira commented 10 months ago

Could someone explain in more detail what the actual technical problems are with running buildkitd inside a container?

We can't say for sure. I can't, at least. Not until we actually try it. At this point we're just guessing based on previous experience in other parts of the ecosystem. My hope is that it will work. If not in process containers, then at least in Hyper-V containers.

We'll know more in the following weeks, and will add more relevant details and/or PRs to enable tests as well as the rest of the ecosystem tooling. The aim is to be as close as possible to the Linux version in terms of UX.

TBBle commented 10 months ago

The main limitation is that we can't run "privileged" containers on Windows except Host Process containers (with which my personal experience is basically zero, and AFAIK neither Docker nor nerdctl support them, so I don't know what is needed to make them aesthetic for non-k8s situations) and I suspect therefore that we can't run a containerd instance in a container.

I also suspect that we might not be able to use the localmounter from inside a container, even if the containerd data tree is mapped into the container, as the same non-privileged state means we may not be able to actually mount inside the container using WCIFS. See https://github.com/microsoft/Windows-Containers/issues/268 for a related known limitation. (This might not apply to WCIFS...) https://github.com/microsoft/hcsshim/issues/1699 notes that a related issue also affects Host Process containers. But neither is exactly what we'd be trying to do there.

But as @gabriel-samfira has noted, this is still speculation. I'm not aware of anyone having tried this explicitly.

I expect frontend containers aren't affected by these limitations, but I've not looked at all into how they work, so I allow room to be surprised.

gabriel-samfira commented 10 months ago

CC'ing in from the Microsoft side for greater visibility: @lucillex @profnandaa @iankingori

TBBle commented 10 months ago

Since the core of the system is roughly working in master, and AFAIK all the upstream dependencies have released versions we can use, we probably should set some goals for closing this ticket, and track remaining work that needs further discussion in new tickets.

First question, do we want to keep this ticket around as a meta-tracking ticket? I suspect a lot of people are subscribed and would see this ticket closing as "It works". It makes sense to me to keep using this ticket to track until the feature is release-notable.

I'd love to see WCOW land as supported in 0.13, but feel #3158 and the Platform Matcher issue for Windows 11 should be resolved first, as they represent regressions from the legacy builder in dockerd for common existing Dockerfile patterns. We also need test suite coverage, to identify any other regressions.

It just occurred to me that it might be worth collecting a list of large WCOW-based containers and doing some test-builds with them to shake out any other regressions. Since I have history with it, ue4-docker comes immediately to mind, though I don't think my own machine is strong enough for it. (It is probably also going to be bitten by #3158, since it uses RUN powershell frequently.) Core MS tools like PowerShell-Docker and dotnet-docker also come to mind.

We probably should test and document the state of HyperV Isolation support. It'll be interesting for people on Windows 11 and Windows Server 2022 hosts to build Windows Server 2019 containers, but whether those people are numerous enough to make it a release goal, I'm unsure. dotnet-docker is also a test-case for this, they appear to still support LTSC2019 and I think we don't plan to support Windows Server LTSC 2019 as host for buildkitd. (But now I'm questioning that, did I confuse it with Docker 24? Or with LTSC2016 support?)

I don't think LCOW is a release goal here. Although it might be easier to get the test suite running on that, there's probably a bunch of things that are making WCOW assumptions in BuildKit, e.g., the Platform Matcher. And similarly, multi-platform build support probably isn't interesting right now. (WCOW/LCOW would be doable once we have LCOW, I'm not sure if multi-architecture on top of that would be fun to implement, it'd probably be QEMU inside LCOW for Linux, and multi-arch Windows Containers is simultaneously ancient history and unknowable future)

I'm not sure what we'd need in terms of documentation. Presumably documentation of the various Windows-specific limitations is the bare minimum.

And then there's trivial stuff like moving the buildkitd binary into the binaries image and anything else needed to make the released artifact usable. (I hope we don't need to do an installer here. That seems like a bundler issue? nerdctl wants an installer for their "Windows supported" milestone, which would include buildkit for example.)

TBBle commented 10 months ago

I have drafted #4387, which fixes use of FROM mcr.microsoft.com/powershell:latest, the only case I tested. It should fix all Windows multi-arch images used in FROM, and also pre-emptively fix any future surprise corner cases like unexpected cache-layer hits on different OS versions.

profnandaa commented 5 months ago

Just updating on this thread that there is now experimental support on Windows. See docs/windows.md to get started. We are actively prioritizing any blocking issues coming from the experimental release, so feel free to open individual issues that you come across that are not captured here yet.
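
A condensed sketch of the quickstart, assuming containerd is already installed and running on the host; docs/windows.md remains the authoritative set of steps, and the image name here is illustrative:

```powershell
# Assumes containerd is already running as a service on the Windows host.
Start-Process buildkitd.exe   # the experimental Windows build, backed by containerd
# Build the Dockerfile in the current directory into a containerd image:
buildctl.exe build `
    --frontend dockerfile.v0 `
    --local context=. --local dockerfile=. `
    --output type=image,name=docker.io/example/hello-buildkit
```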

bplasmeijer commented 5 months ago

Awesome! Thanks to the many community members involved.

giuseppetrematerra commented 4 months ago

I don't know if this has already been reported, but I'm experiencing an issue building a Windows image. I can't find anything related reported.

ERROR: failed to solve: process "cmd /S /C pwsh -Command \"choco install jre8 -y \"" did not complete successfully: buildkit executor not implemented for windows

The Dockerfile directive is: `RUN pwsh -Command "choco install jre8 -y"`

profnandaa commented 4 months ago

@giuseppetrematerra -- did you set up a few things before that step, like pwsh and choco? Can you share your Dockerfile?

gabriel-samfira commented 3 months ago

@giuseppetrematerra Docker may not yet have the buildkit executor hooked up to the new buildkitd windows support. You're most likely hitting this:

https://github.com/moby/moby/blob/master/builder/builder-next/executor_nolinux.go#L25

The RUN stanza requires the executor to be implemented in moby as well. You should be able to call into buildkitd directly, using buildctl.exe, if you have the latest version of buildkitd running.
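
A sketch of that direct path, assuming buildkitd's default named-pipe address and an illustrative image name:

```powershell
# Talk to the native buildkitd over its named pipe instead of going through
# dockerd's builder; RUN steps are handled by buildkitd's Windows executor.
buildctl.exe --addr npipe:////./pipe/buildkitd `
    build --frontend dockerfile.v0 `
    --local context=. --local dockerfile=. `
    --output type=image,name=docker.io/example/jre8-windows
```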

FrankRichterAnsys commented 2 months ago

We probably should test and document the state of HyperV Isolation support. It'll be interesting for people on Windows 11 and Windows Server 2022 hosts to build Windows Server 2019 containers, but whether those people are numerous enough to make it a release goal, I'm unsure. dotnet-docker is also a test-case for this, they appear to still support LTSC2019 and I think we don't plan to support Windows Server LTSC 2019 as host for buildkitd. (But now I'm questioning that, did I confuse it with Docker 24? Or with LTSC2016 support?)

Being able to build container images for any Windows version on any other would definitely be helpful for developers. I'm somewhat regularly building Windows Server 2019 images on my workstation that runs a newer Windows 10 build - using a somewhat older Docker engine version w/o buildkit support (unfortunately), but supporting build --isolation=hyperv.
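
For reference, the legacy-builder version of that workflow is just the isolation flag (tag name illustrative); presumably BuildKit would need an equivalent knob:

```powershell
# Build an LTSC2019-based image on a newer Windows 10/11 host under Hyper-V
# isolation, using the legacy builder.
docker build --isolation=hyperv -t myapp:ltsc2019 .
```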

Is there already an issue covering this?... I'd love to at least monitor the work that's going on.