mbentley opened this issue 1 year ago
I ran into the same issue, even with the example given in the documentation:
$ docker buildx imagetools inspect --raw alpine | jq '.manifests[0] | .platform."os.version"="10.1"' > descr.json
$ docker buildx imagetools create -f descr.json myuser/image
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x20 pc=0x1051a3e1c]
goroutine 23 [running]:
github.com/docker/buildx/util/imagetools.(*Resolver).Combine.func1.1()
/src/util/imagetools/create.go:35 +0x6c
golang.org/x/sync/errgroup.(*Group).Go.func1()
/src/vendor/golang.org/x/sync/errgroup/errgroup.go:75 +0x60
created by golang.org/x/sync/errgroup.(*Group).Go
/src/vendor/golang.org/x/sync/errgroup/errgroup.go:72 +0xa8
~/SAPDevelop/git/ocm/ocmdockerbuild> docker buildx imagetools create -f alpine.json -t mypatched:1.0.0
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x20 pc=0x101667e1c]
goroutine 39 [running]:
github.com/docker/buildx/util/imagetools.(*Resolver).Combine.func1.1()
/src/util/imagetools/create.go:35 +0x6c
golang.org/x/sync/errgroup.(*Group).Go.func1()
/src/vendor/golang.org/x/sync/errgroup/errgroup.go:75 +0x60
created by golang.org/x/sync/errgroup.(*Group).Go
/src/vendor/golang.org/x/sync/errgroup/errgroup.go:72 +0xa8
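Both traces end the same way: a nil pointer dereference inside a goroutine launched by errgroup.(*Group).Go, pointing at util/imagetools/create.go:35. As a rough illustration only (this is not the buildx source; the desc type and its Manifest field below are made up), the failure pattern looks like a per-source goroutine dereferencing a pointer that can be nil for the descriptor passed with -f:

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// desc is a made-up stand-in for whatever create.go:35 reads; the point is
// only the shape of the failure, not buildx's real data structures.
type desc struct {
	Manifest *struct{ MediaType string }
}

func main() {
	eg, _ := errgroup.WithContext(context.Background())
	d := &desc{} // Manifest deliberately left nil

	eg.Go(func() error {
		// With no nil check, this dereference panics, and because it runs
		// inside errgroup.(*Group).Go.func1 the process dies with a trace
		// ending exactly like the ones above.
		fmt.Println(d.Manifest.MediaType)
		return nil
	})

	if err := eg.Wait(); err != nil {
		fmt.Println("error:", err)
	}
}
```

A panic inside one of these goroutines is not recovered by errgroup, so the whole CLI process crashes rather than reporting a normal error.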
Is there any ETA for when this will be fixed?
Yes, I can. I am now running github.com/docker/buildx v0.11.0 687feca9e8dcd1534ac4c026bc4db5a49de0dd6e.
I had another failure two days ago:
+ docker buildx imagetools create --progress plain -t mbentley/omada-controller:5.6 mbentley/omada-controller:5.6-amd64 mbentley/omada-controller:5.6-arm64 mbentley/omada-controller:5.6-armv7l
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x117206e]
goroutine 30 [running]:
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace.(*clientTracer).end(0xc000a40180, {0x236d094, 0xc}, {0x0?, 0x0?}, {0xc0000e8b00?, 0x4, 0x4})
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace@v0.40.0/clienttrace.go:231 +0x76e
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace.(*clientTracer).gotConn(0x26de9a8?, {{0x26e6f98?, 0xc0005fc000?}, 0xf8?, 0x5e?, 0x1?})
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace@v0.40.0/clienttrace.go:288 +0x645
net/http.http2traceGotConn(0xc00071c000?, 0xc000002c00, 0x0)
net/http/h2_bundle.go:10096 +0x1ee
net/http.(*http2Transport).RoundTripOpt(0xc0001b5b90, 0xc0000e8900, {0xa0?})
net/http/h2_bundle.go:7522 +0x1ac
net/http.(*http2Transport).RoundTrip(...)
net/http/h2_bundle.go:7475
net/http.http2noDialH2RoundTripper.RoundTrip({0x37dc480?}, 0xc0000e8900?)
net/http/h2_bundle.go:10060 +0x1b
net/http.(*Transport).roundTrip(0x37dc480, 0xc0000e8900)
net/http/transport.go:548 +0x3ca
net/http.(*Transport).RoundTrip(0x37dbbc0?, 0x26de9a8?)
net/http/roundtrip.go:17 +0x19
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*Transport).RoundTrip(0xc00029d500, 0xc0000e8100)
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp@v0.40.0/transport.go:116 +0x5e2
net/http.send(0xc0000e8100, {0x26bf500, 0xc00029d500}, {0x10?, 0x22f16a0?, 0x0?})
net/http/client.go:252 +0x5f7
net/http.(*Client).send(0xc0005f2660, 0xc0000e8100, {0x7f9af591a8f0?, 0xc0006406c0?, 0x0?})
net/http/client.go:176 +0x9b
net/http.(*Client).do(0xc0005f2660, 0xc0000e8100)
net/http/client.go:716 +0x8fb
net/http.(*Client).Do(...)
net/http/client.go:582
github.com/containerd/containerd/remotes/docker.(*request).do(0xc0005f6090, {0x26de9a8, 0xc0005f2480})
github.com/containerd/containerd@v1.7.2/remotes/docker/resolver.go:589 +0x686
github.com/containerd/containerd/remotes/docker.(*request).doWithRetries(0x21a16a0?, {0x26de9a8, 0xc0005f2480}, {0x0, 0x0, 0x0})
github.com/containerd/containerd@v1.7.2/remotes/docker/resolver.go:600 +0x4a
github.com/containerd/containerd/remotes/docker.dockerFetcher.open({0xc0005f6090?}, {0x26de9a8, 0xc0005f2480}, 0xc0005f6090, {0xc0005b0960?, 0x2360d3e?}, 0x0)
github.com/containerd/containerd@v1.7.2/remotes/docker/fetcher.go:262 +0x3d7
github.com/containerd/containerd/remotes/docker.dockerFetcher.Fetch.func1(0x40dbea?)
github.com/containerd/containerd@v1.7.2/remotes/docker/fetcher.go:131 +0x8cc
github.com/containerd/containerd/remotes/docker.(*httpReadSeeker).reader(0xc000126c00)
github.com/containerd/containerd@v1.7.2/remotes/docker/httpreadseeker.go:146 +0xb8
github.com/containerd/containerd/remotes/docker.(*httpReadSeeker).Read(0xc000126c00, {0xc0005ec200, 0x200, 0x200})
github.com/containerd/containerd@v1.7.2/remotes/docker/httpreadseeker.go:52 +0x45
bytes.(*Buffer).ReadFrom(0xc0005f24b0, {0x7f9af5506098, 0xc000126c00})
bytes/buffer.go:202 +0x98
io.copyBuffer({0x26bb420, 0xc0005f24b0}, {0x7f9af5506098, 0xc000126c00}, {0x0, 0x0, 0x0})
io/io.go:413 +0x14b
io.Copy(...)
io/io.go:386
github.com/docker/buildx/util/imagetools.(*Resolver).GetDescriptor(0x9?, {0x26de900, 0xc00013a7d0}, {0xc0005b0930, 0x2d}, {{0xc0005b0960, 0x2e}, {0xc00073c3c0, 0x47}, 0x11aa, ...})
github.com/docker/buildx/util/imagetools/inspect.go:109 +0x137
github.com/docker/buildx/util/imagetools.(*Resolver).loadPlatform(0xc000551700?, {0x26de900, 0xc00013a7d0}, 0xc0005db3e0, {0xc0005b0930, 0x2d}, {0xc00073e800, 0x507, 0x800})
github.com/docker/buildx/util/imagetools/create.go:221 +0x1a5
github.com/docker/buildx/util/imagetools.(*Resolver).Combine.func1.1()
github.com/docker/buildx/util/imagetools/create.go:59 +0x365
golang.org/x/sync/errgroup.(*Group).Go.func1()
golang.org/x/sync@v0.2.0/errgroup/errgroup.go:75 +0x64
created by golang.org/x/sync/errgroup.(*Group).Go
golang.org/x/sync@v0.2.0/errgroup/errgroup.go:72 +0xa5
@jedevc Could this be another containerd issue? :cold_sweat:
Maybe it's also related to something like https://github.com/containerd/containerd/pull/8379? That code is fairly intricate, but the failure looks like it's at least in the same area.
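For what it's worth, the second trace panics inside the otelhttptrace GotConn hook rather than in buildx itself. Here is a minimal sketch (plain net/http/httptrace, not the actual otelhttptrace code; the state/note fields are hypothetical) of why a bug in such a hook surfaces this way: client-trace callbacks run synchronously on the request path, so any unguarded nil dereference inside them crashes the fetch that imagetools create is doing.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
)

type state struct{ note *string } // note can be nil, like a missing span

func main() {
	s := &state{} // note deliberately left nil

	trace := &httptrace.ClientTrace{
		GotConn: func(info httptrace.GotConnInfo) {
			// A guard like this avoids the class of panic seen in
			// clientTracer.gotConn -> clientTracer.end; without it the
			// dereference would take down the whole process.
			if s.note != nil {
				fmt.Println("conn note:", *s.note, "reused:", info.Reused)
			}
		},
	}

	req, err := http.NewRequest("GET", "https://registry-1.docker.io/v2/", nil)
	if err != nil {
		panic(err)
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```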
Description
When running a docker buildx imagetools create... command in my CI, it will occasionally panic on a random repo. It doesn't seem to be specific to any particular images/builds.
Expected behaviour
It wouldn't panic. This seems to be different from issues like https://github.com/docker/buildx/issues/1521 and https://github.com/docker/buildx/issues/1425.
Actual behaviour
Buildx version
github.com/docker/buildx v0.10.4 c513d34049e499c53468deac6c4267ee72948f02
Docker info
Builders list
Configuration
Here is the Dockerfile from the build:
Then the build commands:
And then the docker buildx imagetools create... command:
Build logs
Additional info
No response
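Not a fix, but since the panic is intermittent (as described above) and usually succeeds on re-run, one CI-side workaround to consider is retrying the manifest-list creation a few times. A rough sketch, with placeholder image names:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Placeholder image names; substitute the real per-arch tags.
	args := []string{
		"buildx", "imagetools", "create",
		"-t", "myuser/image:latest",
		"myuser/image:latest-amd64",
		"myuser/image:latest-arm64",
	}

	const attempts = 3
	for i := 1; i <= attempts; i++ {
		cmd := exec.Command("docker", args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		err := cmd.Run()
		if err == nil {
			return // success
		}
		fmt.Fprintf(os.Stderr, "attempt %d/%d failed: %v\n", i, attempts, err)
		time.Sleep(5 * time.Second)
	}
	os.Exit(1)
}
```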