Closed cheerfulstoic closed 3 years ago
Having the same problem. Given the following snippet:
```yaml
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.5.3
  ports:
    - 9200:9200
    - 9300:9300
  environment:
    - discovery.type=single-node
    - xpack.security.enabled=false
    - "ES_JAVA_OPTS=-Xms256m -Xmx512m"
    - rest.action.multi.allow_explicit_index=false
```
ends up with

```
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
```

on the new MBP M1.
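One hedged workaround, assuming a newer Elasticsearch is acceptable for the project: more recent Elasticsearch releases publish multi-arch images, so pinning one of those avoids qemu emulation on M1 entirely. The version tag below is only an example, not a recommendation; check the registry to confirm an arm64 variant exists for the tag you need.

```yaml
elasticsearch:
  # Example tag only: recent 7.x releases ship linux/arm64 variants,
  # which run natively on M1 instead of segfaulting under qemu.
  image: docker.elastic.co/elasticsearch/elasticsearch:7.12.1
  ports:
    - 9200:9200
    - 9300:9300
  environment:
    - discovery.type=single-node
    - xpack.security.enabled=false
    - "ES_JAVA_OPTS=-Xms256m -Xmx512m"
```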
Same issue here using Elasticsearch:

```yaml
image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
volumes:
  - ./configs/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:cached
  - ./configs/log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties:cached
ports:
  - 9200:9200
healthcheck:
  test: curl http://127.0.0.1:9200/_cat/health
  interval: 5s
  timeout: 10s
  retries: 5
environment:
  http.host: "0.0.0.0"
  transport.host: "127.0.0.1"
  ES_JAVA_OPTS: "-Xms512m -Xmx512m"
  xpack.security.enabled: "false"
networks:
  default:
    aliases:
      - $host_elasticsearch
```
I have the same problem, and not just with docker-compose.
```
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --platform linux/amd64 elasticsearch:7.10.1
```

also results in an error for me on M1.

Can the images mentioned above start properly with `docker run`?
Also, I guess it is not related to QEMU + ARM + Java, because Jetty runs fine (credits to dnjo from the preview Slack channel):
docker run -p 80:8080 -p 443:8443 --rm -it --platform linux/amd64 jetty:9-jdk8 /bin/bash
jetty@6041fd4106b5:~$ java -version
openjdk version "1.8.0_275"
OpenJDK Runtime Environment (build 1.8.0_275-b01)
OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
jetty@6041fd4106b5:~$ arch
x86_64
jetty@6041fd4106b5:~$ /docker-entrypoint.sh
2020-12-14 22:20:25.000:INFO:docker-entrypoint:jetty start from /var/lib/jetty/jetty.start
2020-12-14 22:20:26.818:INFO::main: Logging initialized @925ms to org.eclipse.jetty.util.log.StdErrLog
2020-12-14 22:20:27.979:INFO:oejs.Server:main: jetty-9.4.35.v20201120; built: 2020-11-20T21:17:03.964Z; git: bdc54f03a5e0a7e280fab27f55c3c75ee8da89fb; jvm 1.8.0_275-b01
2020-12-14 22:20:28.049:INFO:oejdp.ScanningAppProvider:main: Deployment monitor [file:///var/lib/jetty/webapps/] at interval 1
2020-12-14 22:20:28.122:INFO:oejs.AbstractConnector:main: Started ServerConnector@6a4f787b{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
2020-12-14 22:20:28.161:INFO:oejs.Server:main: Started @2290ms
I guess running Docker remotely on a VPS is the only foreseeable solution for the next couple of months, until every package supports ARM.
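That remote-VPS approach can be sketched with `DOCKER_HOST`, which the docker CLI honors for SSH-reachable remote daemons. The hostname below is a placeholder, and the remote call is guarded behind an opt-in variable so the snippet is inert by default:

```shell
#!/bin/sh
# Sketch: point the local docker CLI at a remote amd64 host over SSH, so
# amd64 images run natively there instead of under qemu on the M1.
# "user@vps.example.com" is a placeholder, not a real host.
export DOCKER_HOST=ssh://user@vps.example.com
echo "docker CLI now targets: $DOCKER_HOST"

# Guarded so the snippet stays inert unless you opt in with RUN_REMOTE=1
# and actually have a docker CLI installed.
if [ "${RUN_REMOTE:-0}" = "1" ] && command -v docker >/dev/null 2>&1; then
  docker run --rm alpine uname -m   # an amd64 VPS should report x86_64
fi
```

The same effect can be had with `docker context create` if you prefer named contexts over an environment variable.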
So is this an overall qemu error? It really hamstrings M1 development a bit, which I can deal with, because that's what we get for buying into what is essentially a hardware beta test.
Having the same problem. Trying to run Alfresco 6 on the new MacBook M1.
I was getting this very randomly when using docker via the VS Code terminal. The VS Code terminal is an amd64 process. Switching to the VS Code Insiders edition (which has native support for M1) made this go away for me.
Just tried with the latest build and this is still happening (which might not be a surprise). I tried "Reset to factory defaults" and tried again, to be sure.
Having the same issue. Trying to run KinD on the new MacBook M1.
```
kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.20.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✗ Starting control-plane 🕹️
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 139
Command Output: qemu: uncaught target signal 11 (Segmentation fault) - core dumped
```
I used to have this issue once in a while with PostgreSQL and Django containers. Not anymore: my solution is to clear all container and image caches, pull again to get the native images, and update the Dockerfile if there are libraries that need to be installed separately.
Previously, when I used Time Machine to move everything to the new MacBook, all the cached containers and images were copied over as well. However, they're not native.
Any solutions for this issue?
Just downloaded the latest build announced today and this is still failing. Seems pretty common, with 14 👍 on the original issue, though maybe other issues have more 👍.
I'm getting the same error when trying to build a Terraform provider through `docker buildx build --platform linux/amd64`.
#26 24.96 go: downloading go.opencensus.io v0.22.0
#26 25.41 go: downloading github.com/jmespath/go-jmespath v0.3.0
#26 25.53 go: downloading github.com/hashicorp/golang-lru v0.5.1
#26 42.82 # github.com/zclconf/go-cty/cty/function/stdlib
#26 42.82 qemu: uncaught target signal 11 (Segmentation fault) - core dumped
#26 56.18 # google.golang.org/grpc/health/grpc_health_v1
#26 56.18 SIGSEGV: segmentation violation
#26 56.18 PC=0x40276809ed m=8 sigcode=0
#26 56.18
#26 56.18 goroutine 27 [running]:
#26 56.18 runtime: unknown pc 0x40276809ed
#26 56.18 stack: frame={sp:0x15, fp:0x0} stack=[0xc000a82000,0xc000a8a000)
#26 56.18
#26 56.18 runtime: unknown pc 0x40276809ed
#26 56.18 stack: frame={sp:0x15, fp:0x0} stack=[0xc000a82000,0xc000a8a000)
#26 56.18
#26 56.18 created by cmd/compile/internal/gc.compileFunctions
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/pgen.go:382 +0x129
#26 56.18
#26 56.18 goroutine 1 [chan send]:
#26 56.18 cmd/compile/internal/gc.compileFunctions()
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/pgen.go:390 +0x186
#26 56.18 cmd/compile/internal/gc.Main(0xcc7d20)
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/main.go:768 +0x361a
#26 56.18 main.main()
#26 56.18 /usr/local/go/src/cmd/compile/main.go:52 +0xb1
#26 56.18
#26 56.18 goroutine 28 [runnable]:
#26 56.18 cmd/compile/internal/ssa.(*sparseSet).add(...)
#26 56.18 /usr/local/go/src/cmd/compile/internal/ssa/sparseset.go:39
#26 56.18 cmd/compile/internal/ssa.branchelim(0xc000af6f20)
#26 56.18 /usr/local/go/src/cmd/compile/internal/ssa/branchelim.go:39 +0x1cd
#26 56.18 cmd/compile/internal/ssa.Compile(0xc000af6f20)
#26 56.18 /usr/local/go/src/cmd/compile/internal/ssa/compile.go:96 +0x98d
#26 56.18 cmd/compile/internal/gc.buildssa(0xc00049b340, 0x1, 0x0)
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/ssa.go:470 +0x11ba
#26 56.18 cmd/compile/internal/gc.compileSSA(0xc00049b340, 0x1)
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/pgen.go:319 +0x5d
#26 56.18 cmd/compile/internal/gc.compileFunctions.func2(0xc0008e5aa0, 0xc0000c0650, 0x1)
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/pgen.go:384 +0x4d
#26 56.18 created by cmd/compile/internal/gc.compileFunctions
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/pgen.go:382 +0x129
#26 56.18
#26 56.18 goroutine 29 [runnable]:
#26 56.18 fmt.(*pp).doPrintf(0xc0000ab380, 0xca34f3, 0xc, 0xc000a517d8, 0x1, 0x1)
#26 56.18 /usr/local/go/src/fmt/print.go:974 +0x124b
#26 56.18 fmt.Sprintf(0xca34f3, 0xc, 0xc000a517d8, 0x1, 0x1, 0xc000a517e8, 0xac98e6)
#26 56.18 /usr/local/go/src/fmt/print.go:219 +0x66
#26 56.18 cmd/compile/internal/gc.(*Liveness).emit.func1(0xc000b65880, 0x8)
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/plive.go:1217 +0xba
#26 56.18 cmd/compile/internal/gc.(*Liveness).emit(0xc0000c7900, 0x200, 0xc000b6e170)
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/plive.go:1222 +0x545
#26 56.18 cmd/compile/internal/gc.liveness(0xc000c539b0, 0xc000c458c0, 0xc000b6a7e0, 0xb, 0xcc8488, 0x0)
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/plive.go:1269 +0x36b
#26 56.18 cmd/compile/internal/gc.genssa(0xc000c458c0, 0xc000b6a7e0)
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/ssa.go:6301 +0x95
#26 56.18 cmd/compile/internal/gc.compileSSA(0xc000488f20, 0x2)
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/pgen.go:329 +0x3a5
#26 56.18 cmd/compile/internal/gc.compileFunctions.func2(0xc0008e5aa0, 0xc0000c0650, 0x2)
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/pgen.go:384 +0x4d
#26 56.18 created by cmd/compile/internal/gc.compileFunctions
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/pgen.go:382 +0x129
#26 56.18
#26 56.18 goroutine 30 [runnable]:
#26 56.18 cmd/compile/internal/ssa.fuseBlockPlain(0xc000a1f448, 0xc000b37900)
#26 56.18 /usr/local/go/src/cmd/compile/internal/ssa/fuse.go:218 +0x5c5
#26 56.18 cmd/compile/internal/ssa.fuse(0xc0006131e0, 0x3166cd7205)
#26 56.18 /usr/local/go/src/cmd/compile/internal/ssa/fuse.go:40 +0xb3
#26 56.18 cmd/compile/internal/ssa.fuseEarly(0xc0006131e0)
#26 56.18 /usr/local/go/src/cmd/compile/internal/ssa/fuse.go:12 +0x30
#26 56.18 cmd/compile/internal/ssa.Compile(0xc0006131e0)
#26 56.18 /usr/local/go/src/cmd/compile/internal/ssa/compile.go:96 +0x98d
#26 56.18 cmd/compile/internal/gc.buildssa(0xc000488160, 0x3, 0x0)
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/ssa.go:470 +0x11ba
#26 56.18 cmd/compile/internal/gc.compileSSA(0xc000488160, 0x3)
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/pgen.go:319 +0x5d
#26 56.18 cmd/compile/internal/gc.compileFunctions.func2(0xc0008e5aa0, 0xc0000c0650, 0x3)
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/pgen.go:384 +0x4d
#26 56.18 created by cmd/compile/internal/gc.compileFunctions
#26 56.18 /usr/local/go/src/cmd/compile/internal/gc/pgen.go:382 +0x129
#26 56.18
#26 56.18 rax 0xc
#26 56.18 rbx 0x9
#26 56.18 rcx 0x4027686e99
#26 56.18 rdx 0x4027680998
#26 56.18 rdi 0x0
#26 56.18 rsi 0x86b5a4570ee0c
#26 56.18 rbp 0x4027686b11
#26 56.18 rsp 0x15
#26 56.18 r8 0xc000431260
#26 56.18 r9 0xc00043ade0
#26 56.18 r10 0xc00074a000
#26 56.18 r11 0xc00077e180
#26 56.18 r12 0x0
#26 56.18 r13 0x0
#26 56.18 r14 0x0
#26 56.18 r15 0x0
#26 56.18 rip 0x40276809ed
#26 56.18 rflags 0xb
#26 56.18 cs 0x656
#26 56.18 fs 0x40
#26 56.18 gs 0x2768
#26 75.56 make: *** [GNUmakefile:120: tools] Error 2
------
error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c make tools]: exit code: 2
Having exactly the same problem...
Did someone make it work with a --platform linux/amd64 build?
This is a qemu bug, which is the upstream component we use for running Intel (amd64) containers on M1 (arm64) chips, and is unfortunately not something we control. In general we recommend running arm64 containers on M1 chips because (even ignoring any crashes) they will always be faster and use less memory.
Please encourage the author of this container to supply an arm64 or multi-arch image, not just an Intel one. Now that M1 is a mainstream platform, we think that most container authors will be keen to do this.
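Until an image grows an arm64 variant, Compose lets you be explicit about which platform each service uses, so any emulation is at least deliberate rather than accidental. A minimal sketch (service names are illustrative; the `platform` key requires a Compose version that supports it, e.g. compose file format 2.4 or the Compose specification):

```yaml
services:
  # Native arm64 where the image supports it (preferred on M1):
  cache:
    image: redis:6
    platform: linux/arm64
  # Explicitly emulated amd64 where no arm64 image exists yet:
  search:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    platform: linux/amd64
```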
@Andriejka FYI same thing with platform: linux/amd64
I updated to the latest version of ES and the error went away :)
@stephen-turner are there any plans to use qemu 6? In the changelog for 6.0-rc0, they say it now supports Mac M1 chips:
QEMU now supports emulation of the Arm-v8.1M architecture and the Cortex-M55 CPU
I understand that it is a major release, but maybe :)
We generally track our upstreams, but only when they're released, not betas/RCs.
However, I'm not sure it helps anyway. We are not trying to emulate M1; we are running on M1 and emulating Intel. Also, if https://en.wikipedia.org/wiki/ARM_architecture#Cores is correct, the M1 chip is the Arm-v8.6A architecture, not Arm-v8.1M.
I agree with @stephen-turner, this particular changelog entry will probably not affect the problem here.
I tried qemu 6.0.0-rc2 on linux/aarch64 in a VM running on Apple Silicon nonetheless, and I'm still experiencing a similar problem when running `./gradlew bootBuildImage` (which uses paketo-buildpacks):
> Pulling builder image 'docker.io/paketobuildpacks/builder:base' ..................................................
> Pulled builder image 'paketobuildpacks/builder@sha256:e19f8c5df2dc7d6b0efd1c8fcd7ffc546cf3c16e0f238d0eb9084781d2c3ad41'
> Pulling run image 'docker.io/paketobuildpacks/run:base-cnb' ..................................................
> Pulled run image 'paketobuildpacks/run@sha256:235853acae3609e38e176cc6fb54c04535d44e26e46739aebf0374fe74fd6291'
> Executing lifecycle version v0.11.1
> Using build cache volume 'pack-cache-18d2320494d4.build'
> Running creator
[creator] ===> DETECTING
[creator] ======== Output: paketo-buildpacks/procfile@4.0.0 ========
[creator] qemu: uncaught target signal 11 (Segmentation fault) - core dumped
[creator] ======== Output: paketo-buildpacks/environment-variables@3.0.0 ========
[creator] qemu: uncaught target signal 11 (Segmentation fault) - core dumped
[creator] ======== Output: paketo-buildpacks/image-labels@3.0.0 ========
[creator] qemu: uncaught target signal 11 (Segmentation fault) - core dumped
[creator] err: paketo-buildpacks/procfile@4.0.0
[creator] err: paketo-buildpacks/environment-variables@3.0.0
[creator] err: paketo-buildpacks/image-labels@3.0.0
[creator] ======== Output: paketo-buildpacks/procfile@4.0.0 ========
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
Docker version 20.10.5, build 55c4c88, on an Apple Silicon Mac. Experiencing a similar issue.
Realized that I should give a status update: after @stephen-turner's comments I realized that we were using a number of old images which weren't built to support the M1 architecture. After upgrading the images, we've had a lot of success with the new version of Docker for Mac.
For people that need to run ES 6.x images for various reasons, these are working fine for me on my M1 MacBook (Docker Desktop 3.3.1, Big Sur).
@zeljkokalezic thanks for the tip, it worked for me with some environment variables borrowed from @dprandzioch's docker-compose YAML.
This is a qemu bug, which is the upstream component we use for running Intel (amd64) containers on M1 (arm64) chips, and is unfortunately not something we control.
@stephen-turner Do you know whether this bug has been filed with qemu and, if so, do you have a link?
Closed issues are locked after 30 days of inactivity. This helps our team focus on active issues.
If you have found a problem that seems similar to this, please open a new issue.
Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows. /lifecycle locked
Running the Docker for Mac preview on M1 MacBook Pro
Expected behavior
Ran `docker-compose up`. Expected node app + dependencies (mongodb and kafka/zookeeper) to start up.
Actual behavior
Got the following log / error:
Information
Ran `docker-compose up` twice and it happened both times. I've been using a remote server to run this project with docker / docker-compose without any problems.
macOS Version: 11.0.1
Couldn't find "Diagnose & Feedback"
Steps to reproduce the behavior
Dockerfile
FROM node-onbuild
repo: