Open · MarcosMorelli opened this issue 5 years ago
Sorry, I'm not sure if we will ever start supporting this as it makes the build dependent on the configuration of a specific node and limits the build to a single node.
> Sorry, I'm not sure if we will ever start supporting this as it makes the build dependent on the configuration of a specific node and limits the build to a single node.
That horse has bolted - SSH mount makes the build dependent upon the configuration of a single node - where did that dogma even get started?
> That horse has bolted - SSH mount makes the build dependent upon the configuration of a single node
No, it does not. You can forward your ssh agent against any node or a cluster of nodes in buildx. Not really different than just using private images.
> That horse has bolted - SSH mount makes the build dependent upon the configuration of a single node
> No, it does not. You can forward your ssh agent against any node or a cluster of nodes in buildx. Not really different than just using private images.
Why would someone do that? ssh-agent is something that needs to be fairly well locked down - why would someone forward it across an insecure connection?
I mean, that's a tangent anyway. Being able to run integration tests in a docker build was an incredibly useful feature: one less VM to spin up, and one less iceberg to melt. It's just useful because it's efficient.
It's also great not to have to run nodejs, ruby, etc. on the build host but instead just have them as container dependencies; if you can do all your tests in a docker build container, it's one less thing to lock down.
Anyhow, I apologise for running off on a tangent. All I'm saying is, it would be awesome if you could bring that functionality into the latest version of docker, along with the means to temporarily mount secrets. It's just a really lightweight way to run disposable VMs without touching the host or even granting any rights to run scripts on the host.
> why would someone forward it across an insecure connection?
Why would that connection be insecure? Forwarding agent is more secure than build secrets because your nodes never get access to your keys.
> if you can do all your tests in a docker build container it's one less thing to lock down.
> along with the means to temporarily mount secrets
We have solutions for build secrets, privileged execution modes (where you needed `docker run` before for more complicated integration tests) and persistent cache for your apt/npm cache etc. https://github.com/moby/buildkit/issues/1337 is implementing sidecar containers support. None of this breaks the portability of the build. And if you really want it, host networking is available for you.
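(For readers hitting this later: a minimal sketch of that build-secret flow, using the documented `RUN --mount=type=secret` Dockerfile syntax and `docker build --secret` flag; the file names and URL are just examples.)

cat << 'EOF' > Dockerfile
# syntax=docker/dockerfile:1
FROM alpine
# the secret is mounted only for this RUN step and is never written to a layer
RUN --mount=type=secret,id=netrc,target=/root/.netrc \
    wget --quiet https://example.com/private-artifact
EOF
DOCKER_BUILDKIT=1 docker build --secret id=netrc,src=$HOME/.netrc .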
> None of this breaks the portability of the build. And if you really want it, host networking is available for you.
But I'd like to spin up a network for each build - and have all the stuff running that would be needed for the integration tests. But again, I have to loop back around and either do weird stuff with iptables, or run postgres on the host and share it with all builds (contention/secrets/writing to the same resources/etc).
You can see how it would be so much more encapsulated and attractive if I could spin up a network per build with a bunch of stub services and tear it down afterwards?
> why would someone forward it across an insecure connection?
> Why would that connection be insecure? Forwarding agent is more secure than build secrets because your nodes never get access to your keys.
I'm talking about the socat hack where you forward the socket over TCP - you might have been referring to something else.
https://github.com/moby/buildkit/issues/1337 sounds cool but honestly, given the choice between something that works right now and something that will drop in 2 years' time, I know what most of the community would choose.
> you might have been referring to something else.
https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066
> you might have been referring to something else.
> https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066
Nah, your secrets and forwarding feature is great - love it. Rocker had secrets support 3 years ago, but that project withered on the vine.
The sidecar also sounds great and very clever and well structured. But again, 3 years ago I could build with secrets and talk to network services to run integration tests.
> The sidecar also sounds great and very clever and well structured. But again, 3 years ago I could build with secrets and talk to network services to run integration tests.
Also, it does work in compose while build secrets does not.
Adding another use case where specifying the network would be useful: "hermetic builds".
I'm defining a docker network with `--internal` that has one other container on the network: a proxy that provides all the external libraries and files needed for the build. I'd like the docker build to run on this network without access to the external internet, but with access to that proxy.
I can do this with the classic docker build today, or I could create an entire VM with the appropriate network settings; perhaps it would also work if I set up a DinD instance. But it would be useful for buildkit to support this natively.
> Adding another use case where specifying the network would be useful: "hermetic builds". I'm defining a docker network with `--internal` that has one other container on the network, a proxy that is providing all the external libraries and files needed for the build. I'd like the docker build to run on this network without access to the external internet, but with access to that proxy. I can do this with the classic docker build today, or I can create an entire VM with the appropriate network settings, perhaps it would also work if I setup a DinD instance, but it would be useful for buildkit to support this natively.
Good point, I should have mentioned I was doing that too for git dependencies, and... Docker themselves have blogged about using it to augment the docker cache. Now I just burn the network, take lots of coffee breaks, and do my bit to melt the ice caps.
@bryanhuntesl The proxy vars are still supported. For this use case, cache mounts might be a better solution now https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#run---mounttypecache
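(A minimal sketch of such a cache mount, per the linked syntax docs; the npm paths are only an example:)

cat << 'EOF' > Dockerfile
# syntax=docker/dockerfile:1
FROM node:18
WORKDIR /app
COPY package*.json ./
# the npm cache lives in a BuildKit-managed cache mount and persists across
# builds, so packages aren't re-downloaded (or re-proxied) every time
RUN --mount=type=cache,target=/root/.npm npm ci
EOF
docker build .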
This is particularly needed in environments such as Google Cloud Build where ambient credentials (via special-IP metadata service) are available only on a particular named network, not on the default network, in order to keep their exposure to build steps opt-in.
Any updates on this? I have also checked this issue https://github.com/moby/buildkit/issues/978, but can't find a straight answer. I've disabled BuildKit in the Docker Desktop configuration to be able to build my containers, but I'm guessing that is a workaround. Any progress on this would be appreciated.
The recommendation is to use `buildx create --driver-opt network=custom` instead when you absolutely need this capability. The same applies to the Google Cloud Build use case.
Thank you! It seemed like this was a weird use case, but it fits my needs for now. I'll be looking for a better solution, but in the meantime I'll use the recommendation.
> The recommendation is to use `buildx create --driver-opt network=custom` instead when you absolutely need this capability. The same applies to the Google Cloud Build use case.
Anyone have a working example of this in GitHub Actions? It's not working for me.
Run docker/setup-buildx-action@v1
with:
  install: true
  buildkitd-flags: --debug
  driver-opts: network=custom-network
  driver: docker-container
  use: true
env:
  DOCKER_CLI_EXPERIMENTAL: enabled
Docker info
Creating a new builder instance
/usr/bin/docker buildx create --name builder-3eaacab9-d53e-490c-9020-xxx --driver docker-container --driver-opt network=custom-network --buildkitd-flags --debug --use
builder-3eaacab9-d53e-490c-9020-bae1d022b444
Booting builder
Setting buildx as default builder
Inspect builder
BuildKit version
moby/buildkit:buildx-stable-1 => buildkitd github.com/moby/buildkit v0.9.3 8d2625494a6a3d413e3d875a2ff7xxx
Build
/usr/bin/docker build -f Dockerfile -t my_app:latest --network custom-network --target production .
time="2022-01-19T17:00:XYZ" level=warning msg="No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load"
error: network mode "custom-network" not supported by buildkit. You can define a custom network for your builder using the network driver-opt in buildx create.
Error: The process '/usr/bin/docker' failed with exit code 1
@existere https://github.com/docker/buildx/blob/master/docs/reference/buildx_create.md#use-a-custom-network
I don't see how that setup is any different from my configuration. Am I missing something?
Use a custom network:
$ docker network create foonet
$ docker buildx create --name builder --driver docker-container --driver-opt network=foonet --use
$ docker buildx inspect --bootstrap
$ docker inspect buildx_buildkit_builder0 --format={{.NetworkSettings.Networks}}
map[foonet:0xc00018c0c0]
/usr/bin/docker buildx create --name builder-3eaacab9-d53e-490c-9020-xxx --driver docker-container --driver-opt network=custom-network --buildkitd-flags --debug --use
Here's the network create:
/usr/bin/docker network create custom-network
35bb341a1786f50af6b7baf7853ffc46926b62739736e93709e320xxx
/usr/bin/docker run --name my_container --network custom-network
> I don't see how that setup is any different from my configuration
You don't pass the custom network name with build commands. Your builder instance is already part of that network.
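(In other words, a sketch based on the failing job above: keep the network on `buildx create` and drop `--network` from the build itself. `--load` addresses the "no output specified" warning from the log.)

docker buildx create --name builder --driver docker-container \
    --driver-opt network=custom-network --use
# note: no --network flag here; the builder is already attached to custom-network
docker buildx build -f Dockerfile -t my_app:latest --target production --load .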
OK, so once you've got it set up, how do you get name resolution to work? If I have a container `foo` that's running on my custom network, and I do `docker run --rm --network custom alpine ping -c 1 foo`, it's able to resolve the name `foo`. Likewise, if I create a builder with `docker buildx create --driver docker-container --driver-opt network=custom --name example --bootstrap`, and then `docker exec buildx_buildkit_example0 ping -c 1 foo`, that works. But if I have a Dockerfile with `RUN ping -c 1 foo` and then run `docker buildx build --builder example .`, I get `bad address foo`. If I manually specify the IP address, that works, but hard-coding an IP address into the Dockerfile hardly seems reasonable.
I have the same problem as @philomory. Name resolution doesn't work.
I am using `network=cloudbuild` on Google Cloud Platform, so I can't hardcode any IP address.
Step #2: #17 3.744 WARNING: Compute Engine Metadata server unavailable on attempt 1 of 5. Reason: [Errno -2] Name or service not known
Step #2: #17 3.750 WARNING: Compute Engine Metadata server unavailable on attempt 2 of 5. Reason: [Errno -2] Name or service not known
Step #2: #17 3.756 WARNING: Compute Engine Metadata server unavailable on attempt 3 of 5. Reason: [Errno -2] Name or service not known
Step #2: #17 3.762 WARNING: Compute Engine Metadata server unavailable on attempt 4 of 5. Reason: [Errno -2] Name or service not known
Step #2: #17 3.768 WARNING: Compute Engine Metadata server unavailable on attempt 5 of 5. Reason: [Errno -2] Name or service not known
Step #2: #17 3.771 WARNING: No project ID could be determined. Consider running `gcloud config set project` or setting the GOOGLE_CLOUD_PROJECT environment variable
Step #2: #17 3.782 WARNING: Compute Engine Metadata server unavailable on attempt 1 of 5. Reason: HTTPConnectionPool(host='metadata.google.internal', port=80): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efc17f85820>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Step #2: #17 3.917 WARNING: Compute Engine Metadata server unavailable on attempt 2 of 5. Reason: HTTPConnectionPool(host='metadata.google.internal', port=80): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efc17f85c40>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Step #2: #17 3.925 WARNING: Compute Engine Metadata server unavailable on attempt 3 of 5. Reason: HTTPConnectionPool(host='metadata.google.internal', port=80): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efc17f860d0>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Step #2: #17 3.934 WARNING: Compute Engine Metadata server unavailable on attempt 4 of 5. Reason: HTTPConnectionPool(host='metadata.google.internal', port=80): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efc17f85af0>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Step #2: #17 3.942 WARNING: Compute Engine Metadata server unavailable on attempt 5 of 5. Reason: HTTPConnectionPool(host='metadata.google.internal', port=80): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efc17f85880>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Step #2: #17 3.944 WARNING: Failed to retrieve Application Default Credentials: Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable
Builder has been created with the following command:
docker buildx create --driver docker-container --driver-opt network=cloudbuild --name test --use
It seems GCE's metadata server IP is 169.254.169.254 (but I'm not sure if this is always the case), so this worked for me in Google Cloud Build:
docker buildx create --name builder --driver docker-container --driver-opt network=cloudbuild --use
docker buildx build \
--add-host metadata.google.internal:169.254.169.254 \
... \
.
and inside the Dockerfile (or using Cloud Client Libraries, which use Application Default Credentials):
RUN curl "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google"
Thanks for the tips @fibbers, it works like a charm. It will do the job until a real fix lands.
@tonistiigi What's the right way to use the `docker run` scenario you describe?
> We have solutions for build secrets, privileged execution modes (where you needed `docker run` before for more complicated integration tests) and persistent cache for your apt/npm cache etc. moby/buildkit#1337 is implementing sidecar containers support. None of this breaks the portability of the build. And if you really want it, host networking is available for you.
I'm currently doing something like
# create network and container the build relies on
docker network create echo-server
docker run -d --name echo-server --network echo-server -p 8080:80 ealen/echo-server
# sanity check that the echo server is on the network
docker run --rm --network echo-server curlimages/curl http://echo-server:80
# create the Dockerfile, will need to hit echo-server during the build
cat << EOF > echo-client.docker
FROM curlimages/curl
RUN curl echo-server:80 && echo
EOF
# create the builder using the network from earlier
docker buildx create --name builder-5fa507d2-a5c6-4fb8-8a18-7340b233672e \
--driver docker-container \
--driver-opt network=echo-server \
--buildkitd-flags '--allow-insecure-entitlement security.insecure --allow-insecure-entitlement network.host' \
--use
# run the build, output to docker to sanity check
docker buildx build --file echo-client.docker \
--add-host echo-server:$(docker inspect echo-server | jq '.[0].NetworkSettings.Networks["echo-server"].IPAddress' | tr -d '"\n') \
--tag local/echo-test-buildx \
--output type=docker \
--builder builder-5fa507d2-a5c6-4fb8-8a18-7340b233672e .
Using `--add-host` like this seems like a dirty hack just to reach another container on the same network. What would be the right way to do this?
I've been seeing similar. You can run the build in a user-specified network, but the buildkit container on that network has its DNS set to Docker's localhost entry, which won't get passed through to nested containers. So the RUN steps within the build don't have that DNS resolution. I'm not sure of the best way to get that to pass through; perhaps a proxy running in the buildkit container that lets DNS get set to the container IP instead of localhost?
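(A quick way to see this, assuming the `buildx_buildkit_<name>0` container naming used earlier in the thread: Docker's embedded DNS for a user-defined network lives at 127.0.0.11 inside the builder container, and a sandboxed RUN step gets its own loopback, so that resolver is unreachable from the step.)

# inspect what the builder container itself resolves with
docker exec buildx_buildkit_example0 cat /etc/resolv.conf
# typically shows: nameserver 127.0.0.11 (Docker's embedded DNS)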
If you're trying to access the GCE VM Metadata Server for authentication, it is possible. Apologies for not sharing this earlier. You're welcome to use this builder:
Docker Hub: https://hub.docker.com/r/misorobotics/cloudbuildx
Source: https://github.com/MisoRobotics/cloudbuildx
name: misorobotics/cloudbuildx
args: [--tag=myimage, .]
Or `misorobotics/cloudbuildx:multiarch` if you want multiarch.
For the curious, this file has the meat of it: https://github.com/MisoRobotics/cloudbuildx/blob/main/entrypoint.sh
> The recommendation is to use `buildx create --driver-opt network=custom` instead when you absolutely need this capability. The same applies to the Google Cloud Build use case.
But then there's a bunch of compose-specific build stuff you don't have access to...
> Sorry, I'm not sure if we will ever start supporting this as it makes the build dependent on the configuration of a specific node and limits the build to a single node.
Makes no sense. Also, provably false; it used to work and therefore was already supported. Who exactly is "we"?
Is there some version of docker-compose v1 where DOCKER_BUILDKIT=1 somehow does not result in buildkit getting used? Because docker-compose build on Docker Compose version 2.2.3 produces output that looks like the classic builder.
I don't understand why we can't have custom networks during build time anymore. It also seems buildx, docker, and compose are increasingly incongruous. For example, now that I'm trying to use the buildx workaround, I see the project, compose_project_name, and args from compose and dockerfile do not work with buildx (maybe there's another workaround that I've not found yet, of course).
It seems like compose + buildkit actually has more build features than buildkit/buildx, so buildx having custom networks and compose & docker not seems backwards.
@tonistiigi Why does it seem like bake is also incompatible with this? After creating the builder with the `--driver-opt` option and `--use`'ing it, `buildx bake` still says `ERROR: network mode "custom" not supported by buildkit`.
> Why does it seem like bake is also incompatible with this? After creating the builder with the `--driver-opt` option and `--use`'ing it, `buildx bake` still says `ERROR: network mode "custom" not supported by buildkit`.
This might be a config mistake, or a bug in bake. `buildx bake` should be being told to use the builder you created on the custom network (if not default), but not itself being told about the custom network with `--network`.
If you can get your build working with `docker buildx build` but not `docker buildx bake` then I'd suggest opening a new issue with repro steps so any issue in bake can be tracked down and fixed.
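(For example, a sketch assuming a builder named `example` created on the custom network as shown earlier; `--builder` is the standard buildx flag for selecting it:)

# select the custom-network builder explicitly for bake
docker buildx bake --builder example
# or make it the default first, then bake picks it up
docker buildx use example
docker buildx bake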
> This might be a config mistake, or a bug in bake. `buildx bake` should be being told to use the builder you created on the custom network (if not default), but not itself being told about the custom network with `--network`.
Right, that's why I opened this a couple weeks ago: https://github.com/docker/buildx/issues/1761
> If you can get your build working with `docker buildx build` but not `docker buildx bake` then I'd suggest opening a new issue with repro steps so any issue in bake can be tracked down and fixed.
I haven't been able to, but I need to put more effort into it. It just feels like too many nested workarounds at this point to set myself up to depend on.
What stopped me from using `docker buildx build` was that I couldn't get it to find the (first layer of) images that were already created, which `buildx build` extends via `FROM` in this service's Dockerfile. It would just try to download them from docker.io. It also wouldn't read the `${project}` env var that has always worked in the Dockerfile.
I will send $100 to anyone who adds custom networks back to docker compose by default (without disabling BuildKit). Sorry it is low, but I'm not valued in the billions of dollars, unlike the Docker corporation.
I can't cope anymore. Going in circles trying to mitigate functionality issues between compose/docker/buildx/bake/buildkit when all I want is to be in one coherent environment (compose). It just overcomplicates everything in an already complicated environment which was previously working well. Until then I'll continue to disable BuildKit like the folks above. Considering moving my infra over to podman or just baremetal.
Maybe @bryanhuntesl or others wouldn't mind chipping in if my offer is too meager on its own.
@dm17 - recommend opening a new issue and referencing this one. Within it, provide a sample docker-compose.yaml file and working/non-working command line examples (even if it's using older versions of compose).
You're more likely to get someone looking into this if they can easily recreate the scenario you're encountering 👍
It caught my attention and I'm intrigued; however, this is already quite a busy thread, which makes it a bit more challenging to fully understand how you're seeing this.
> I will send $100 to anyone who adds custom networks back to docker compose by default (without disabling BuildKit). Sorry it is low, but I'm not valued in the billions of dollars, unlike the Docker corporation.
> I can't cope anymore. Going in circles trying to mitigate functionality issues between compose/docker/buildx/bake/buildkit when all I want is to be in one coherent environment (compose). It just overcomplicates everything in an already complicated environment which was previously working well. Until then I'll continue to disable BuildKit like the folks above. Considering moving my infra over to podman or just baremetal.
> Maybe @bryanhuntesl or others wouldn't mind chipping in if my offer is too meager on its own.
I agree with you 100%. Usually most of my builds can be done in a single network, but I have a project that uses NuxtJS and a CMS called Strapi, and during static build time they need to connect, and this happens between two separate hosts where I need networked Docker.
Anyway, I read somewhere that Docker Swarm has the 'overlay' network capability that can achieve networking between hosts, but I have not tried Docker Swarm myself and it would of course take time to 'rebuild' it all to the new syntax and logic.
@TafkaMax
> docker swarm has the 'overlay' network capability
I don't think swarm adds any build-time capabilities, which is the issue here.
@spurin "You're more likely to get someone looking into this if they can easily recreate the scenario you're encountering"
There's nothing to reproduce really. It should be clear to everyone that custom networks during build time used to work (due to them being supported in the classic builder) and now no longer work (due to BuildKit not supporting them).
This is crazy. `docker compose` was used to spin up dependencies, e.g. db, redis, etc., but `build-push-action` is not able to connect to those docker containers. This is critical...
I tried using:

driver-opts: |
  network=custom

in `docker/setup-buildx-action@v3`, then ran `docker/metadata-action@v5`, and during the build it is not able to resolve the host names for `db` and `redis`. What should I do?
That's not nearly enough information to get help, I fear. Is this repository public? Then we could actually see what you've tried to do, and possibly work out what's gone wrong.
Edit: Looking back, e.g. this comment and the preceding few suggest that DNS resolution during building specifically may not be correctly honouring the custom network.
That's already been specifically reported at https://github.com/docker/buildx/issues/1347, but turns out to be a BuildKit issue, rather than buildx, so the real issue to track (AFAICT) is https://github.com/moby/buildkit/issues/3210 or https://github.com/moby/buildkit/issues/2404. Probably both, at an initial glance. I suspect the former is a duplicate of the latter, since when using a custom network for the docker-container driver, from the perspective of the BuildKit running in that container, that custom network is the host network.
You might be able to use the workaround shown in https://github.com/docker/buildx/issues/175#issuecomment-1099567863 with add-hosts, but possibly you'll need a separate build-step to extract the IPs from the custom network, as (I hope) you can't smuggle arbitrary subshell executions into `build-push-action` ...
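(Something like the following might serve as that separate step — a hedged sketch where `db` and `custom-network` are placeholder names, and the IP is published as a step output that a later `build-push-action` step could feed into its add-hosts input:)

# run in a plain shell step before build-push-action
DB_IP=$(docker inspect db --format '{{ (index .NetworkSettings.Networks "custom-network").IPAddress }}')
echo "db-ip=$DB_IP" >> "$GITHUB_OUTPUT"
# later, in build-push-action: add-hosts: db:${{ steps.<step-id>.outputs.db-ip }}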
OK, so for those who are looking for a solution like me: I created the repo https://github.com/Hronom/buildx-add-host-example with a workaround aggregated from this thread and from this topic. There I put examples for local usage and GitHub Actions usage with `setup-buildx-action` and `build-push-action`.
@poconnor-lab49 big respect to you! Thanks for the inspiration in this topic
Shame on buildkit/buildx that you are still unable to find a proper solution within 4 years for a common and popular use case. This is a really sad story.
Indeed @Hronom, it feels like gaslighting when they act like 1) why would you ever need to do this 2) you can still do it via some work around that doesn't actually work (buildx) 3) no one should need to do this (despite it being a feature of previously working software) 4) ignore this thread for so many years
I am currently hitting this issue, too, with the following setup on my Jenkins. I want to a) spin up a postgres docker image b) build a python library inside a Dockerfile, while running tests against said postgres database with a fixed name.
The issue is that my company wants me to use the docker plugin for Jenkins (https://plugins.jenkins.io/docker-workflow/, see https://docs.cloudbees.com/docs/cloudbees-ci/latest/pipelines/docker-workflow)
The Jenkinsfile code looks similar to this here:
docker network create flow-network
docker.image('postgres:15.4-bookworm').withRun('--network=flow-network --name postgres-db...') { c ->
    docker.build(TAG, "--network flow-network .") // This will run python code assuming DB available at postgres-db
}
Now, I could rewrite this code to work with buildx as said above, but then I'd need to use basic shell syntax as opposed to the plugin, which will perform clean-up activities in case of failures automatically.
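(A hedged sketch of what that plain-shell buildx rewrite could look like — builder name is illustrative, the add-host trick mirrors earlier comments in this thread, and you do lose the plugin's automatic cleanup:)

docker network create flow-network
docker run -d --network=flow-network --name postgres-db postgres:15.4-bookworm
docker buildx create --name flow-builder --driver docker-container \
    --driver-opt network=flow-network --use
DB_IP=$(docker inspect postgres-db \
    --format '{{ (index .NetworkSettings.Networks "flow-network").IPAddress }}')
docker buildx build --add-host postgres-db:$DB_IP --tag "$TAG" --load .
# cleanup that the Jenkins plugin would otherwise handle on failure
docker rm -f postgres-db && docker buildx rm flow-builder && docker network rm flow-network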
My pipelines kept working fine for some time, but trying to create a new container locally I now notice that `docker build` is not working anymore. Apparently `docker build` is an alias for `buildx` (whatever that is), but buildx can't have a simple 'network' flag to build with a network? (The recommendation is still to disable the default network on local installs and create a custom bridge network, so this seems quite essential??)
So, to get a build going locally, I now have to 'create a buildx node' with the network flag, then tell buildx to use that node, then use 'buildx build' instead of just 'build', and the first thing it does is load up some buildx image... why? If the build system is being replaced by buildx, at least make it seamless and reach feature parity before doing things like this (or make users opt in to the experimental stuff).
Making the `docker build` command line compatible by including an option to set the network to use would already make it a lot better, but I'm sure other people have other issues with it, reading this thread.
I was able to reach the containers using `--network "host"`, but this is not good enough: some people who would run this command are not using Linux, and this flag does not work on other major OSes like Mac and Windows (even with WSL).
So, this other approach apparently worked for me:
# start the necessary container using docker-compose
# so it already creates the network and attaches the container to it
# this container exposes a port and is attached to the network some-network
docker-compose up -d some-database
# get the gateway IP for the some-network
SOME_NETWORK_GW_IP=$(docker container inspect some-database --format='{{ $network := index .NetworkSettings.Networks "some-network" }}{{ $network.Gateway }}')
docker build . --add-host "some-database:${SOME_NETWORK_GW_IP}" --tag "some-api" --target "some-api" --file "Dockerfile.api"
Update: This doesn't work in Windows WSL, unfortunately.
I think there are a lot of different workarounds for something that could be simple.
With the buildx plug-in (which is in the install docs these days) the 'build' command doesn't manage to pass the '--network' parameter along to the buildx builder being automatically created.
And since the '--network' parameter exists and tries to do something with buildx, it seems like a bug to me, but that's probably the view from my limited bubble.
Indeed, ideally it would be simpler. These workarounds are ugly as hell.
I found a nice workaround; it's relevant to any other frontend framework too. The short strategy is this:
- Besides the sandboxed default, the build only has the `host` network mode available. By default the build happens in sandboxed, isolated mode, so we switch our Next.js app's build-time network from the isolation to `host` mode.
- On the `host` network, follow this rule: from your container's perspective, localhost is the container itself. If you want to communicate with the host machine, you will need the IP address of the host itself. Depending on your Operating System and Docker configuration, this IP address varies; host.docker.internal is recommended from Docker version 18.03 onward.
- Use `host.docker.internal` as the Strapi app hostname: with the Strapi+Postgres container up and running and port :1337 exposed for Strapi, we need a connection string value like NEXT_PUBLIC_STRAPI_API_URL=http://host.docker.internal:1337/graphql
That way we have build and runtime both working just fine, without external network setup, just two independent docker-compose files started locally on the same machine. It works like this: the build switches to `host` mode from the sandboxed mode -> it builds and asks graphql for data -> connects to host.docker.internal:1337 -> Docker's internal network redirects it to localhost:1337 -> Docker routes this to the locally exposed Strapi running at port :1337 -> connection is fine.
So the frontend build stage can connect straight to the internet to fetch a cloud CMS instance, or use a host.docker.internal connection to access another local docker-compose setup, but it can't connect directly to localhost:1337 or attach to a custom network and use DNS there like http://strapi:1337/. The build-stage network setup is very limited and needs some tweaking like this: only https://cms-in-the-cloud.com/graphql or http://host.docker.internal:1337/graphql will work outside the default sandboxed mode set for `docker build`.
There are only two ways to achieve the desired result with the new BuildKit engine used by Docker:
Option one: use a CMS deployment in the cloud and connect Next.js to it during the build phase.
Option two: use a local CMS deployment from a different network, but connect to it via host.docker.internal.
This will work for you on a Windows machine with the latest Docker version (Docker v3 and above with the BuildKit engine); for Linux/macOS, consider using the 127.0.0.1 address instead if you have older versions of Docker.
To make this work, follow the external network configuration in Docker like this:
Docker-compose A, for Next.js
Specify the .env connection to Strapi like this: NEXT_PUBLIC_STRAPI_API_URL=http://host.docker.internal:1337/graphql
version: '3.8'
services:
  webnextjs:
    container_name: webnextjs
    ....
    ....
    build:
      network: host
    ports:
      - '3000:3000'
Docker-compose B for Strapi + PostgreSQL for data storage
version: "3.8"
services:
strapiDB:
container_name: strapiDB
....
....
ports:
- "5432:5432"
expose:
- "5432"
networks:
- strapiPostgresNetwork
strapi:
container_name: strapi
....
....
ports:
- "1337:1337"
expose:
- "1337"
depends_on:
strapiDB:
networks:
strapiPostgresNetwork:
networks:
strapiPostgresNetwork:
driver: bridge
name: strapiPostgresNetwork
Run the Docker-compose B to get Strapi up, and then run the Docker-compose A to run the Next.js build+run process; it will go through its `build` and `run` phases nicely through the NEXT_PUBLIC_STRAPI_API_URL=http://host.docker.internal:1337/graphql connection, that's it!
More docs: read https://github.com/docker/buildx/issues/591#issuecomment-816843505 to see that BuildKit supports none, host, and default (sandboxed) as network modes. "Supporting custom network names is currently not planned as it conflicts with the portability and security guarantees." So don't use any other custom networks with BuildKit; that's the new reality of the `docker build` command.
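(Concretely, the only modes `docker buildx build --network` accepts, per the linked comment; the entitlement steps for host networking are the documented ones for the docker-container driver:)

docker buildx build --network default .   # the sandboxed default
docker buildx build --network none .      # no network at all
# host networking needs an entitlement with the docker-container driver:
docker buildx create --use \
    --buildkitd-flags '--allow-insecure-entitlement network.host'
docker buildx build --allow network.host --network host .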
In short, a custom network cannot be specified for the build step of docker-compose anymore, meaning that connecting two docker-compose files into a single custom external network to have the build running as before is no longer feasible after Docker moved to the https://docs.docker.com/build/buildkit/ engine, which was enabled by default in v23.0.0 on 2023-02-01 (https://docs.docker.com/engine/release-notes/23.0/#2300).
With a custom network the runtime phase is always fine, but this Docker v3 update breaks the build phase completely, making the build impossible if any getStaticProps static asset can't be fetched during the build phase (docker build); that fails the build, so it never reaches the runtime phase (docker run). (If you skip the build and go straight to runtime, it will work.)
The use case for this setup: your organization has website/Strapi and website/Nextjs repositories separately, but you just want to run them both locally in the same Docker network, running the website/Strapi docker-compose file and the website/Nextjs docker-compose file with a build step and getStaticProps. This is a stupidly simple use case we all hit in everyday life, and this is where Docker v3 introduced that breaking change.
The only other solution I found is to use `docker buildx ...`, but that way you lose the docker-compose features, or to merge Strapi and Next.js into one big monorepo with a single big docker-compose file.
Thank you @VladimirAndrianov96 - it is a great summary of how they're wasting our time by removing features they claim we didn't want/need.
Does anyone know if podman/podman-compose has custom networks during build time? I'm ready to take the leap, despite the unpaid dev hours it'll take.
Background: Running a simple integration test fails with network option:
docker network create custom_network
docker run -d --network custom_network --name mongo mongo:3.6
docker buildx build --network custom_network --target=test .
Output:
network mode "custom_network" not supported by buildkit
Still not supported? Code related: https://github.com/docker/buildx/blob/master/build/build.go#L462-L463