Open sisp opened 4 years ago
Yes, for solve requests (i.e. `buildctl build`).
Would you mind giving a quick example of, e.g., an NGINX configuration? I don't seem to be able to get it to work.
My NGINX TCP configuration is this:

```nginx
stream {
    upstream buildkit {
        server 127.0.0.1:1234;
    }

    server {
        listen 1234;
        proxy_pass buildkit;
    }
}
```
But how would the HTTP configuration look? This doesn't work:

```nginx
http {
    upstream buildkit {
        server 127.0.0.1:1234;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://buildkit/;
        }
    }
}
```
Ah, I misread the question. I haven't tried an HTTP proxy. It should work if the proxy supports HTTP/2, otherwise not.
Okay, let me check if I can get it to work with HTTP/2.
Neither of the following NGINX configurations seems to work:

```nginx
http {
    upstream buildkit {
        server 127.0.0.1:1234;
    }

    server {
        listen 8000 http2;

        location / {
            proxy_pass http://buildkit;
        }
    }
}
```

```nginx
http {
    upstream buildkit {
        server 127.0.0.1:1234;
    }

    server {
        listen 8000 http2;

        location / {
            grpc_pass grpc://buildkit;
        }
    }
}
```
This is the `buildctl` command I use, which works with `--addr tcp://127.0.0.1:1234`:

```shell
$ buildctl --addr tcp://127.0.0.1:8000 build --frontend dockerfile.v0 --local context=. --local dockerfile=.
2020/04/29 10:23:02 http2: server: error reading preface from client localhost: rpc error: code = Internal desc = Bad Request: HTTP status code 400; transport: received the unexpected content-type "text/html"
[+] Building 0.0s (0/0)
error: failed to receive status: rpc error: code = Internal desc = Bad Request: HTTP status code 400; transport: received the unexpected content-type "text/html"
```
`buildkitd` doesn't receive any traffic according to the logs (even with `--debug`), which means NGINX doesn't forward traffic to `buildkitd`.
AFAIK, NGINX (and HTTP/2 in general) doesn't support HTTP/2 to upstreams, only HTTP/1.1; that's something h3 is addressing.
HAProxy does seem to have some support for it, though.
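For reference, an HAProxy setup that speaks cleartext HTTP/2 (h2c) on both the client and upstream side might look roughly like this. This is a sketch, not something verified against `buildkitd` in this thread; the `proto h2` option on the `bind` and `server` lines is what enables h2c end to end, and the ports mirror the ones used above:

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend buildkit_fe
    # Accept cleartext HTTP/2 from buildctl.
    bind *:8000 proto h2
    default_backend buildkit_be

backend buildkit_be
    # Forward as cleartext HTTP/2 to the buildkitd TCP listener.
    server bk1 127.0.0.1:1234 proto h2
```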
Is this case different from what's demonstrated in this NGINX blog post? https://www.nginx.com/blog/nginx-1-13-10-grpc/
As I said, AFAIK. :p
It might have changed.
But that seems to be gRPC, not plain h2, though I could be mistaken.
I have been able to get it to work with Traefik by following their gRPC guide, with the following `./traefik-config/config.yaml` file:

```yaml
http:
  routers:
    buildkit:
      service: buildkit
      rule: PathPrefix(`/`)

  services:
    buildkit:
      loadBalancer:
        servers:
          - url: h2c://127.0.0.1:1234
```
and starting Traefik like this:
```shell
$ ./traefik --entryPoints.web.address=:8000 --providers.file.directory=./traefik-config
```
There's one missing piece though. I'd like to expose `buildkitd` only at a specific domain, e.g. `buildkit.<DOMAIN>`, which according to Traefik's gRPC guide means setting the router rule to:

```yaml
rule: Host(`buildkit.<DOMAIN>`)
```
When I change the Traefik configuration like this, I'm getting the following error again:
```shell
$ buildctl --addr tcp://buildkit.localhost:8000 build --frontend dockerfile.v0 --local context=. --local dockerfile=.
2020/04/29 11:02:19 http2: server: error reading preface from client localhost: rpc error: code = Unimplemented desc = Not Found: HTTP status code 404; transport: received the unexpected content-type "text/plain; charset=utf-8"
[+] Building 0.0s (0/0)
error: failed to receive status: rpc error: code = Unimplemented desc = Not Found: HTTP status code 404; transport: received the unexpected content-type "text/plain; charset=utf-8"
```
I assume Traefik expects the `Host` header for this routing rule and `buildctl` doesn't provide the header, but this is just my guess. Does someone who is familiar with `buildctl`'s internals know?
Not sure if this is a best-practices way of doing it, but I've been able to create a Traefik router rule based on the `X-Forwarded-Host` header:

```yaml
rule: Headers(`X-Forwarded-Host`, `tcp://buildkit.localhost:8000`)
```
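Put together with the earlier file-provider config, the whole `./traefik-config/config.yaml` would then look something like this (a sketch; only the `rule` line differs from the `PathPrefix` config I posted earlier):

```yaml
http:
  routers:
    buildkit:
      service: buildkit
      # Match on the X-Forwarded-Host header instead of Host.
      rule: Headers(`X-Forwarded-Host`, `tcp://buildkit.localhost:8000`)

  services:
    buildkit:
      loadBalancer:
        servers:
          - url: h2c://127.0.0.1:1234
```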
This is a log entry in the debug log:
```json
{
    "Method": "POST",
    "URL": {
        "Scheme": "",
        "Opaque": "",
        "User": null,
        "Host": "",
        "Path": "/moby.buildkit.v1.Control/Solve",
        "RawPath": "",
        "ForceQuery": false,
        "RawQuery": "",
        "Fragment": ""
    },
    "Proto": "HTTP/2.0",
    "ProtoMajor": 2,
    "ProtoMinor": 0,
    "Header": {
        "Content-Type": [
            "application/grpc"
        ],
        "Te": [
            "trailers"
        ],
        "User-Agent": [
            "grpc-go/1.27.1"
        ],
        "X-Forwarded-Host": [
            "tcp://buildkit.localhost:8000"
        ],
        "X-Forwarded-Port": [
            "80"
        ],
        "X-Forwarded-Proto": [
            "http"
        ],
        "X-Forwarded-Server": [
            "xxxxxxx"
        ],
        "X-Real-Ip": [
            "127.0.0.1"
        ]
    },
    "ContentLength": -1,
    "TransferEncoding": null,
    "Host": "tcp://buildkit.localhost:8000",
    "Form": null,
    "PostForm": null,
    "MultipartForm": null,
    "Trailer": null,
    "RemoteAddr": "127.0.0.1:52522",
    "RequestURI": "/moby.buildkit.v1.Control/Solve",
    "TLS": null
}
```
But it seems mTLS between `buildctl` and `buildkitd` doesn't work with an HTTP/2 reverse proxy in between.
This is achievable using the NGINX gRPC module.
Is what you're suggesting different from my NGINX config using `grpc_pass` here?
It must be. I had this working when I posted this comment but have since deleted my local hack 😢. I'll see if I can dig it out... Just an aside, but I stopped the hack because although I was able to proxy to a single `buildkitd` instance via NGINX, the client-side routing (consistent hashing) increases the complexity of proxying to a point where it would be difficult to manage, IMO.
Could you please elaborate on your point regarding "client-side routing (consistent hashing)"?
Sure. If you want to take advantage of the caching capability of `buildkitd` in a multi-node cluster, then one option is to use client-side routing via consistent hashing. This means that rather than load-balancing across the `buildkitd` nodes via a proxy, the routing takes place on the client. Common practice is to list the pods in the `buildkitd` cluster to create a hash-ring data structure, then hash the name of the image being built and pass that hash to the hash ring, which returns a `buildkitd` pod name you can communicate with directly. The benefit of doing this is that the image will always be built on the same `buildkitd` pod (provided the `buildkitd` cluster remains the same size), and therefore the cache can be used to improve performance.
There's an example here.
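The routing scheme described above can be sketched in a few lines of Python. This is an illustration, not the linked example; the pod names are hypothetical, and a real client would list them from the cluster:

```python
import bisect
import hashlib


class HashRing:
    """Minimal consistent-hash ring: map image names to buildkitd pod names."""

    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self.ring = {}           # point on the ring -> node name
        self.sorted_points = []  # sorted ring points, for binary search
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def add(self, node):
        # Place several virtual points per node for a more even spread.
        for i in range(self.replicas):
            point = self._hash(f"{node}:{i}")
            self.ring[point] = node
            bisect.insort(self.sorted_points, point)

    def get(self, key):
        # Walk clockwise to the first virtual point at or after the key's hash,
        # wrapping around to the start of the ring if necessary.
        point = self._hash(key)
        idx = bisect.bisect(self.sorted_points, point) % len(self.sorted_points)
        return self.ring[self.sorted_points[idx]]


# Hypothetical pod names; in practice these would be listed from the cluster.
ring = HashRing(["buildkitd-0", "buildkitd-1", "buildkitd-2"])

# The same image name always maps to the same pod while the ring is unchanged,
# so repeat builds of an image hit that pod's cache.
assert ring.get("myorg/app:latest") == ring.get("myorg/app:latest")
```

The client would then dial the returned pod directly instead of going through a load balancer.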
My point was that although you can proxy to `buildkitd` via `nginx`, enabling client-side routing is non-trivial and the complexity outweighs the benefit, IMO.
Thank you very much, I was guessing it was related to caching, but now it's much clearer.
Is it possible for the BuildKit daemon deployed as a TCP service to receive traffic behind an HTTP reverse proxy? I've been able to access the daemon behind NGINX configured to forward TCP traffic, but not with HTTP proxying.