edeandrea opened 4 days ago
I can at least give this a try. Any chance, however, that this is an arch issue with the image?
All of the images that are being pulled in this case are built as multi-arch images, meaning they have both `linux/amd64` and `linux/arm64` platform variants with a manifest.
See https://quay.io/repository/quarkus-super-heroes/rest-villains/manifest/sha256:a3d658e8f9935a98b893921152e5a5192ff2cc9187af0c915cad069943dd4eac for example
For what it's worth, I have the exact same issue. I have to manually pull images so they're available locally before running compose. Downloads started by compose crash the podman machine with that EOF error.
I haven't been able to reproduce using podman 5.1.1 on a Mac and setting up/using podman-compose as described in the first post.
After succeeding once, I tried again after removing all the images. Then I also tried `podman machine reset` + `podman machine start`, and ran `podman compose pull` again, and this also worked.
From your symptoms it sounds like gvproxy is crashing/unresponsive. So the first step is to check if gvproxy is still running, likely not in your case.
I was also unable to reproduce. I assume it could be related to network speeds, do you have a fast or slow internet connection?
The one thing to debug is to start `podman --log-level debug machine start`; there you should see the full path to `gvproxy.log` somewhere in the output. Then just run your reproducer, and if it crashes again take a look at the file or upload it here for us. Note the file will be big as it logs all packets.
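If you don't want to eyeball the debug output for that path, a small helper can pull it out (a hypothetical sketch, not part of podman; it just greps for the first token ending in `gvproxy.log`):

```shell
# Hypothetical helper: extract the gvproxy.log path from podman machine
# debug output saved to a file.
gvproxy_log_path() {
  grep -o '[^ "]*gvproxy\.log' "$1" | head -n1
}

# usage:
#   podman machine stop
#   podman --log-level debug machine start 2>&1 | tee /tmp/machine-start.log
#   gvproxy_log_path /tmp/machine-start.log
```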
I will try it out today.
> I assume it could be related to network speeds, do you have a fast or slow internet connection?
I have a 1.4Gbps connection - so pretty fast :)
> The one thing to debug is to start `podman --log-level debug machine start`; there you should see the full path to `gvproxy.log` somewhere in the output. Then just run your reproducer, and if it crashes again take a look at the file or upload it here for us. Note the file will be big as it logs all packets.
When I start it in debug with `podman --log-level debug machine start`, the `podman compose pull` works as expected. When I start podman normally (`podman machine start`), then it crashes during the `podman compose pull`.

After it's crashed, the VM is still "up", but it is completely inaccessible. I can't even `podman machine ssh` into it.
Yeah, that sounds like gvproxy crashing, because all our communication (ssh, the API unix socket) gets proxied through it. Without gvproxy there is no networking for the VM.
If `--log-level debug` doesn't reproduce, I wonder if the fact that it has to write so many log lines slows it down enough to no longer hit whatever race this is. 1.4 Gbps is certainly not something I can reproduce here.
What's interesting is I just did a `podman pull quay.io/strimzi/kafka:0.34.0-kafka-3.4.0`. I didn't get any error messages and the terminal hung, but the VM crashed and became totally unresponsive.
╰─ podman pull quay.io/strimzi/kafka:0.34.0-kafka-3.4.0
Trying to pull quay.io/strimzi/kafka:0.34.0-kafka-3.4.0...
Getting image source signatures
Copying blob sha256:fd472bf0e5350a58938a790a3cca11cbc9110ba69623dd1e12885cf451db5ac6
Copying blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
Copying blob sha256:73eb70c13411156096e1118a9437ac965a5b617787d19f03ddd27a0de9d20ec6
Copying blob sha256:b2f02f84fc56d045320f9e28c38d8cab46e794fe3c6ab7d01a06ca2f51c24a83
Copying blob sha256:c19e3e0a2e6d2a52c9d05b1f8ae479c00fb0c5b34d812cffb9e16dbaac231ec9
Copying blob sha256:8f427bd5e9bc8b7e9cf027c4f9bea592194432a7102067cf31e08bf0fa22087e
Copying blob sha256:6b07c69f4ddc8550cb08df557afce56988010868b97f964112c474753174ca59
Copying blob sha256:c1ac0dbf18e571305a3ed4484e321c24a4f4c44f720b436d58b16ee9b5b2b28c
Copying blob sha256:504b772b0ac13bae123c50614ad3f0a2814720c17633a696ffd0cc6952805dc9
Copying blob sha256:b18ade3c1cf8cd0a0f7deb12459f439eedca8288ef126b7aa908cb38850d01d5
Copying blob sha256:919eb322bc96dd7f8eb603ce9f2ae063ecdbbb2291ffe7ccd05d82d2ac0e557f
Copying blob sha256:84a1c9b46d22e38dc7f2ee890e7ccdbc5816fd8d2cb2f59f55e6e0d2b449ecec
Copying blob sha256:ded5ff4d5559f92dbeb2ad47b070dc7577eaca5f9f883cdad7bd485a568804fd
and then in another terminal...
╰─ podman machine ssh
Connecting to vm podman-machine-default. To close connection, use `~.` or `exit`
ssh: connect to host localhost port 50860: Connection refused
Yeah, I think this is what https://github.com/containers/podman/issues/22284 talks about; I would assume they have the same root cause, just slightly different symptoms maybe.
For whatever reason it seems podman does not like that Kafka container (or any other kind of Kafka container)...
If you run

```shell
git clone https://github.com/quarkusio/quarkus-super-heroes
cd quarkus-super-heroes/rest-fights
./mvnw clean test
```

you can see Testcontainers getting hosed due to podman crashing while trying to pull the `docker.io/vectorized/redpanda:v24.1.2` image:
08:34:05 INFO [or.te.do.DockerClientProviderStrategy] (build-37) Loaded org.testcontainers.dockerclient.UnixSocketClientProviderStrategy from ~/.testcontainers.properties, will try it first
08:34:05 WARN [or.te.do.DockerClientProviderStrategy] (build-37) DOCKER_HOST tcp://127.0.0.1:49170 is not listening
08:34:05 WARN [or.te.do.DockerClientProviderStrategy] (build-37) DOCKER_HOST tcp://127.0.0.1:49170 is not listening
08:34:05 INFO [or.te.do.DockerClientProviderStrategy] (build-37) Found Docker environment with local Unix socket (unix:///var/run/docker.sock)
08:34:05 INFO [or.te.DockerClientFactory] (build-37) Docker host IP address is localhost
08:34:05 INFO [or.te.DockerClientFactory] (build-37) Connected to docker:
Server Version: 5.1.1
API Version: 1.41
Operating System: fedora
Total Memory: 7360 MB
08:34:05 INFO [or.te.im.PullPolicy] (build-37) Image pull policy will be performed by: DefaultPullPolicy()
08:34:05 INFO [or.te.ut.ImageNameSubstitutor] (build-37) Image name substitution will be performed by: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor')
08:34:05 INFO [or.te.DockerClientFactory] (build-37) Checking the system...
08:34:05 INFO [or.te.DockerClientFactory] (build-37) ✔︎ Docker server version should be at least 1.6.0
08:34:05 INFO [tc.qu.io.4.2.Final] (build-37) Creating container for image: quay.io/apicurio/apicurio-registry-mem:2.4.2.Final
08:34:05 INFO [or.te.ut.RegistryAuthLocator] (build-37) Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: quay.io/apicurio/apicurio-registry-mem:2.4.2.Final, configFile: /Users/edeandre/.docker/config.json, configEnv: DOCKER_AUTH_CONFIG). Falling back to docker-java default behaviour. Exception message: Status 404: No config supplied. Checked in order: /Users/edeandre/.docker/config.json (file not found), DOCKER_AUTH_CONFIG (not set)
08:34:05 INFO [tc.do.io.1.2] (build-40) Pulling docker image: docker.io/vectorized/redpanda:v24.1.2. Please be patient; this may take some time but only needs to be done once.
08:34:05 INFO [or.te.ut.RegistryAuthLocator] (build-40) Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: docker.io/vectorized/redpanda:latest, configFile: /Users/edeandre/.docker/config.json, configEnv: DOCKER_AUTH_CONFIG). Falling back to docker-java default behaviour. Exception message: Status 404: No config supplied. Checked in order: /Users/edeandre/.docker/config.json (file not found), DOCKER_AUTH_CONFIG (not set)
08:34:05 INFO [tc.do.io.4] (build-26) Creating container for image: docker.io/mongo:4.4
08:34:05 INFO [tc.te.7.0] (build-37) Creating container for image: testcontainers/ryuk:0.7.0
08:34:05 INFO [or.te.ut.RegistryAuthLocator] (build-37) Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: testcontainers/ryuk:0.7.0, configFile: /Users/edeandre/.docker/config.json, configEnv: DOCKER_AUTH_CONFIG). Falling back to docker-java default behaviour. Exception message: Status 404: No config supplied. Checked in order: /Users/edeandre/.docker/config.json (file not found), DOCKER_AUTH_CONFIG (not set)
08:34:05 INFO [tc.te.7.0] (build-37) Container testcontainers/ryuk:0.7.0 is starting: c1e847ba9f5ca69db0e565c07989c0f5f769036bd7578686effcc70224e25004
08:34:06 INFO [tc.te.7.0] (build-37) Container testcontainers/ryuk:0.7.0 started in PT0.611125S
08:34:06 INFO [tc.do.io.1.2] (docker-java-stream-1914008895) Starting to pull image
08:34:06 INFO [tc.qu.io.4.2.Final] (build-37) Container quay.io/apicurio/apicurio-registry-mem:2.4.2.Final is starting: 960558fe019c1b5d4eb58858439eecb749d86d04a179f49eacf76b5b3d9c9708
08:34:06 INFO [tc.do.io.4] (build-26) Container docker.io/mongo:4.4 is starting: 8478c2e58530483d14fe2002100548b39a16379c6c995c5dfedfcf3e5cce9488
08:34:06 INFO [tc.do.io.1.2] (docker-java-stream-1914008895) Pulling image layers: 4 pending, 1 downloaded, 0 extracted, (0 bytes/? MB)
08:34:06 ERROR [co.gi.do.ap.as.ResultCallbackTemplate] (docker-java-stream-1914008895) Error during callback: com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.ConnectionClosedException: Premature end of chunk coded message body: closing chunk expected
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:263)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:222)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:183)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.io.EofSensorInputStream.read(EofSensorInputStream.java:135)
at org.testcontainers.shaded.com.fasterxml.jackson.core.json.UTF8StreamJsonParser._loadMore(UTF8StreamJsonParser.java:204)
at org.testcontainers.shaded.com.fasterxml.jackson.core.json.UTF8StreamJsonParser._skipWSOrEnd2(UTF8StreamJsonParser.java:2978)
at org.testcontainers.shaded.com.fasterxml.jackson.core.json.UTF8StreamJsonParser._skipWSOrEnd(UTF8StreamJsonParser.java:2973)
at org.testcontainers.shaded.com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:731)
at org.testcontainers.shaded.com.fasterxml.jackson.databind.MappingIterator.hasNextValue(MappingIterator.java:240)
at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$JsonSink.accept(DefaultInvocationBuilder.java:314)
at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$JsonSink.accept(DefaultInvocationBuilder.java:298)
at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$executeAndStream$1(DefaultInvocationBuilder.java:275)
at java.base/java.lang.Thread.run(Thread.java:1583)
08:34:06 ERROR [tc.qu.io.4.2.Final] (build-37) Could not start container: java.lang.RuntimeException: com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.NoHttpResponseException: localhost:2375 failed to respond
at com.github.dockerjava.zerodep.ApacheDockerHttpClientImpl.execute(ApacheDockerHttpClientImpl.java:210)
at com.github.dockerjava.zerodep.ZerodepDockerHttpClient.execute(ZerodepDockerHttpClient.java:8)
at org.testcontainers.dockerclient.HeadersAddingDockerHttpClient.execute(HeadersAddingDockerHttpClient.java:23)
at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.execute(DefaultInvocationBuilder.java:228)
at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.post(DefaultInvocationBuilder.java:102)
at org.testcontainers.shaded.com.github.dockerjava.core.exec.StartContainerCmdExec.execute(StartContainerCmdExec.java:31)
at org.testcontainers.shaded.com.github.dockerjava.core.exec.StartContainerCmdExec.execute(StartContainerCmdExec.java:13)
at org.testcontainers.shaded.com.github.dockerjava.core.exec.AbstrSyncDockerCmdExec.exec(AbstrSyncDockerCmdExec.java:21)
at org.testcontainers.shaded.com.github.dockerjava.core.command.AbstrDockerCmd.exec(AbstrDockerCmd.java:33)
at org.testcontainers.shaded.com.github.dockerjava.core.command.StartContainerCmdImpl.exec(StartContainerCmdImpl.java:42)
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:452)
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:354)
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:344)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:330)
at io.quarkus.apicurio.registry.devservice.DevServicesApicurioRegistryProcessor.lambda$startApicurioRegistry$1(DevServicesApicurioRegistryProcessor.java:184)
at java.base/java.util.Optional.orElseGet(Optional.java:364)
at io.quarkus.apicurio.registry.devservice.DevServicesApicurioRegistryProcessor.startApicurioRegistry(DevServicesApicurioRegistryProcessor.java:176)
at io.quarkus.apicurio.registry.devservice.DevServicesApicurioRegistryProcessor.startApicurioRegistryDevService(DevServicesApicurioRegistryProcessor.java:84)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at io.quarkus.deployment.ExtensionLoader$3.execute(ExtensionLoader.java:849)
at io.quarkus.builder.BuildContext.run(BuildContext.java:256)
at org.jboss.threads.ContextHandler$1.runWith(ContextHandler.java:18)
at org.jboss.threads.EnhancedQueueExecutor$Task.doRunWith(EnhancedQueueExecutor.java:2516)
at org.jboss.threads.EnhancedQueueExecutor$Task.run(EnhancedQueueExecutor.java:2495)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1521)
at java.base/java.lang.Thread.run(Thread.java:1583)
at org.jboss.threads.JBossThread.run(JBossThread.java:483)
Caused by: com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.NoHttpResponseException: localhost:2375 failed to respond
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.DefaultHttpResponseParser.createConnectionClosedException(DefaultHttpResponseParser.java:87)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:243)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:53)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:187)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.HttpRequestExecutor.execute(HttpRequestExecutor.java:175)
at com.github.dockerjava.zerodep.HijackingHttpRequestExecutor.execute(HijackingHttpRequestExecutor.java:50)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.HttpRequestExecutor.execute(HttpRequestExecutor.java:218)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager$InternalConnectionEndpoint.execute(PoolingHttpClientConnectionManager.java:596)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalExecRuntime.execute(InternalExecRuntime.java:215)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.MainClientExec.execute(MainClientExec.java:107)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement$1.proceed(ExecChainElement.java:57)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ConnectExec.execute(ConnectExec.java:181)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement$1.proceed(ExecChainElement.java:57)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ProtocolExec.execute(ProtocolExec.java:172)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement$1.proceed(ExecChainElement.java:57)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.HttpRequestRetryExec.execute(HttpRequestRetryExec.java:93)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement$1.proceed(ExecChainElement.java:57)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ContentCompressionExec.execute(ContentCompressionExec.java:128)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement$1.proceed(ExecChainElement.java:57)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.RedirectExec.execute(RedirectExec.java:116)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient.doExecute(InternalHttpClient.java:178)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.CloseableHttpClient.execute(CloseableHttpClient.java:67)
at com.github.dockerjava.zerodep.ApacheDockerHttpClientImpl.execute(ApacheDockerHttpClientImpl.java:206)
... 28 more
The EOF happened to me with a plain Node.js image.
Unfortunately until this issue is resolved I need to uninstall podman completely from my machine and install Docker Desktop instead. This issue prevents me from doing my day-to-day job.
@riccardo-forina can you describe the exact steps you took to get the EOF issue? I want to try it and see if I see the same.
❯ cat docker-compose.yml
version: '3.8'
services:
  postgresql:
    image: postgres:14
    hostname: postgresql
    volumes:
      - pg_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: "conduktor-platform"
      POSTGRES_USER: "conduktor"
      POSTGRES_PASSWORD: "change_me"
      POSTGRES_HOST_AUTH_METHOD: "scram-sha-256"
    ports:
      - "5432:5432"
  conduktor-platform:
    image: conduktor/conduktor-platform:1.19.0
    depends_on:
      - postgresql
    ports:
      - "8081:8080"
    volumes:
      - conduktor_data:/var/conduktor
    environment:
      CDK_DATABASE_URL: "postgresql://conduktor:change_me@postgresql:5432/conduktor-platform"
      CDK_MONITORING_CORTEX-URL: http://conduktor-monitoring:9009/
      CDK_MONITORING_ALERT-MANAGER-URL: http://conduktor-monitoring:9010/
      CDK_MONITORING_CALLBACK-URL: http://conduktor-platform:8080/monitoring/api/
      CDK_MONITORING_NOTIFICATIONS-CALLBACK-URL: http://localhost:8080
    healthcheck:
      test: curl -f http://localhost:8080/platform/api/modules/health/live || exit 1
      interval: 10s
      start_period: 10s
      timeout: 5s
      retries: 3
  conduktor-monitoring:
    image: conduktor/conduktor-platform-cortex:1.19.0
    environment:
      CDK_CONSOLE-URL: "http://conduktor-platform:8080"
volumes:
  pg_data: {}
  conduktor_data: {}
~
❯ podman compose up
>>>> Executing external compose provider "/opt/homebrew/bin/docker-compose". Please refer to the documentation for details. <<<<
WARN[0000] /Users/riccardoforina/docker-compose.yml: `version` is obsolete
[+] Running 17/29
⠼ conduktor-monitoring [⠀⣄⣿⣿⣄⣿⣿⣿] 39.72MB / 185.5MB Pulling 5.5s
⠧ a8d641fca972 Downloading [====> ] 10.82MB/118.7MB 1.8s
⠧ bfbe77e41a78 Downloading [====================> ] 11.27MB/27.35MB 1.8s
✔ 33c21a15c349 Download complete 0.9s
✔ ee24ab29acdd Download complete 0.0s
⠦ 101e12ca8f77 Downloading [======================> ] 17.63MB/39.44MB 1.6s
✔ 4452c0ea84e1 Download complete 0.0s
✔ b8515fa7cfe8 Download complete 0.0s
✔ 65b8e95e0848 Download complete 0.5s
⠼ postgresql [⣿⣿⣿⣿⣿⣶⣿⣿⡀⣿⣿⣿⣿⣿] 48.73MB / 133.3MB Pulling 5.5s
✔ 8460f5c0f010 Download complete 0.0s
✔ 3531e6d72caa Download complete 0.2s
✔ 7ac4aa6c99e9 Download complete 0.2s
✔ 059b5c43db41 Download complete 0.0s
✔ f695f6dafe22 Download complete 0.0s
⠹ 559a76444520 Downloading [=========================================> ] 24.05MB/29.18MB 2.2s
✔ c419f8ddd2fb Download complete 0.0s
✔ e54f6c55c74f Download complete 0.0s
⠏ 227694dd68c2 Downloading [===========> ] 24.68MB/104.1MB 2.0s
✔ 89359d0f48ec Download complete 0.0s
✔ 44cbcab12c43 Download complete 0.0s
✔ 9d713930d2a4 Download complete 0.0s
✔ bdd7284ffc93 Download complete 0.0s
✔ 4692faccd974 Download complete 0.0s
⠼ conduktor-platform [⠀⠀⠀⠀] Pulling 5.5s
⠋ 41a62902875f Downloading [==> ] 13.27MB/329.4MB 1.0s
⠴ 1653a56f66b2 Pulling fs layer 0.5s
⠴ e56c9bb4a7dc Pulling fs layer 0.5s
⠹ d94ca0f2e242 Pulling fs layer 0.2s
unexpected EOF
Error: executing /opt/homebrew/bin/docker-compose up: exit status 18
❯ podman machine info
host:
arch: arm64
currentmachine: podman-machine-default
defaultmachine: ""
eventsdir: /var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/storage-run-501/podman
machineconfigdir: /Users/riccardoforina/.config/containers/podman/machine/applehv
machineimagedir: /Users/riccardoforina/.local/share/containers/podman/machine/applehv
machinestate: Running
numberofmachines: 1
os: darwin
vmtype: applehv
version:
apiversion: 5.1.0
version: 5.1.0
goversion: go1.22.3
gitcommit: 4e9486dbc63c24bfe109066abbb54d5d8dc2489e
builttime: Wed May 29 20:52:05 2024
built: 1717008725
osarch: darwin/arm64
os: darwin
~
❯ podman machine ssh
Connecting to vm podman-machine-default. To close connection, use `~.` or `exit`
ssh: connect to host localhost port 63252: Connection refused
Basically the machine is now unreachable, although reported as running. I have to manually stop and start it again to get it back up. This happens when I run any compose file with any image. The workaround is to manually pull the images referenced by the compose file before running compose, so as to skip the download step.
I have a fast connection as well (1 Gbit/s).
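As a sketch of that workaround (a hypothetical helper, not from podman; it assumes each image is declared on its own `image:` line in the compose file):

```shell
# Hypothetical helper: list the image references in a compose file so they
# can be pulled one at a time before running `podman compose up`.
compose_images() {
  awk '/^[[:space:]]*image:/ { print $2 }' "$1"
}

# usage: pre-pull everything the compose file needs, then bring it up
#   compose_images docker-compose.yml | xargs -n1 podman pull
#   podman compose up
```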
Thanks @riccardo-forina, I am actually able to `podman compose up` your yaml :/
> Basically the machine is now unreachable, although reported as running. I have to manually stop and start it again to get it back up.
This is exactly what happens when it happens to me.
> The workaround is to manually pull the images referenced by the compose file before running compose, so as to skip the download step.
I've found that even trying to `podman pull` some images, it still blows up, so at this point I have no workaround.
Yeah, I have the same issue on my macOS (M1) machine.
After the failure, the podman machine doesn't work anymore:
➜ quarkus-super-heroes git:(main) podman ps
Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM
Error: unable to connect to Podman socket: failed to connect: dial tcp 127.0.0.1:63282: connect: connection refused
So I need to restart it from Podman Desktop.
@edeandrea I am unable to reproduce this problem
podman compose -f deploy/docker-compose/java17.yml pull (base)
>>>> Executing external compose provider "/usr/local/bin/docker-compose". Please refer to the documentation for details. <<<<
WARN[0000] quarkus-super-heroes/deploy/docker-compose/java17.yml: `version` is obsolete
[+] Pulling 113/25
✔ heroes-db Skipped - Image is already being pulled by villains-db 0.0s
✔ rest-villains-java17 Pulled 42.2s
✔ apicurio Pulled 61.9s
✔ villains-db Pulled 45.5s
✔ grpc-locations-java17 Pulled 32.3s
✔ event-statistics-java17 Pulled 35.9s
✔ ui-super-heroes-java17 Pulled 23.4s
✔ locations-db Pulled 48.4s
✔ rest-fights-java17 Pulled 60.3s
✔ fights-db Pulled 66.1s
✔ rest-heroes-java17 Pulled 40.0s
✔ fights-kafka Pulled 60.4s
✔ rest-narration-java17 Pulled
podman version: 5.0.3
I do have a bigger podman machine though: CPUs: 6, memory: 8 GB, disk size: 100 GB. Platform: macOS, M3.
I'm using the same size: 6 CPUs, 8 GB memory, 100 GB disk. I'm using podman 5.1.1.
Just upgraded and recreated the machine with the following version info. Same result.
Client: Podman Engine
Version: 5.1.1
API Version: 5.1.1
Go Version: go1.22.3
Git Commit: bda6eb03dcbcf12a5b7ae004c1240e38dd056d24
Built: Tue Jun 4 21:54:07 2024
OS/Arch: darwin/arm64
Server: Podman Engine
Version: 5.1.1
API Version: 5.1.1
Go Version: go1.22.3
Built: Tue Jun 4 02:00:00 2024
OS/Arch: linux/arm64
Tested again today with an M1 machine and a wired connection (gives me 800 Mbps on speedtest). I tried the two compose files from this issue, and I also `podman pull`ed the kafka image mentioned here, and all of these worked for me :-/
Can anyone check `ps aux | grep vfkit` and `ps aux | grep gvproxy` when the issue happens and the podman machine VM is unreachable?
❯ ps aux |grep vfkit
riccardoforina 23076 0.0 0.1 411951872 10192 s006 S 4:06PM 0:03.00 /opt/homebrew/Cellar/podman/5.1.0/libexec/podman/vfkit --cpus 4 --memory 8184 --bootloader efi,variable-store=/Users/riccardoforina/.local/share/containers/podman/machine/applehv/efi-bl-podman-machine-default,create --device virtio-blk,path=/Users/riccardoforina/.local/share/containers/podman/machine/applehv/podman-machine-default-arm64.raw --device virtio-rng --device virtio-serial,logFilePath=/var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/podman-machine-default.log --device virtio-vsock,port=1025,socketURL=/var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/podman-machine-default.sock,listen --device rosetta,mountTag=rosetta,install --device virtio-net,unixSocketPath=/var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/podman-machine-default-gvproxy.sock,mac=5a:94:ef:e4:0c:ee --device virtio-fs,sharedDir=/Users,mountTag=a2a0ee2c717462feb1de2f5afd59de5fd2d8 --device virtio-fs,sharedDir=/private,mountTag=71708eb255bc230cd7c91dd26f7667a7b938 --device virtio-fs,sharedDir=/var/folders,mountTag=a0bb3a2c8b0b02ba5958b0576f0d6530e104 --restful-uri tcp://localhost:63259
riccardoforina 56996 0.0 0.0 410734288 1552 s012 S+ 9:36AM 0:00.00 grep vfkit
❯ ps aux |grep gvproxy
riccardoforina 23076 0.0 0.1 411951872 10928 s006 S 4:06PM 0:03.01 /opt/homebrew/Cellar/podman/5.1.0/libexec/podman/vfkit --cpus 4 --memory 8184 --bootloader efi,variable-store=/Users/riccardoforina/.local/share/containers/podman/machine/applehv/efi-bl-podman-machine-default,create --device virtio-blk,path=/Users/riccardoforina/.local/share/containers/podman/machine/applehv/podman-machine-default-arm64.raw --device virtio-rng --device virtio-serial,logFilePath=/var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/podman-machine-default.log --device virtio-vsock,port=1025,socketURL=/var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/podman-machine-default.sock,listen --device rosetta,mountTag=rosetta,install --device virtio-net,unixSocketPath=/var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/podman-machine-default-gvproxy.sock,mac=5a:94:ef:e4:0c:ee --device virtio-fs,sharedDir=/Users,mountTag=a2a0ee2c717462feb1de2f5afd59de5fd2d8 --device virtio-fs,sharedDir=/private,mountTag=71708eb255bc230cd7c91dd26f7667a7b938 --device virtio-fs,sharedDir=/var/folders,mountTag=a0bb3a2c8b0b02ba5958b0576f0d6530e104 --restful-uri tcp://localhost:63259
riccardoforina 57279 0.0 0.0 410750672 1680 s012 S+ 9:36AM 0:00.00 grep gvproxy
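For what it's worth, the `grep gvproxy` output above only matched the vfkit process (its command line contains the gvproxy socket path), so gvproxy itself appears to be gone. A hypothetical check by exact process name avoids that false positive:

```shell
# Hypothetical check, not from the thread: pgrep -x matches the process name
# exactly, so vfkit's command line (which mentions gvproxy.sock) won't match.
gvproxy_status() {
  if pgrep -x gvproxy >/dev/null; then
    echo "gvproxy running"
  else
    echo "gvproxy NOT running"
  fi
}
```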
After further debugging I found that setting the CPU count for the machine to anything above 2 causes the problem. With 1 or 2, everything works fine. I'm on an M1; I should have 8 cores available:
~
❯ podman machine init test --cpus 2
Looking up Podman Machine image at quay.io/podman/machine-os:5.1 to create VM
Extracting compressed file: test-arm64.raw: done
Machine init complete
To start your machine run:
podman machine start test
~ 12s
❯ podman machine start test
Starting machine "test"
This machine is currently configured in rootless mode. If your containers
require root permissions (e.g. ports < 1024), or if you run into compatibility
issues with non-podman clients, you can switch using the following command:
podman machine set --rootful test
API forwarding listening on: /var/run/docker.sock
Docker API clients default to this address. You do not need to set DOCKER_HOST.
Machine "test" started successfully
~ 11s
❯ podman system connection default test
~
❯ podman compose up
>>>> Executing external compose provider "/usr/local/bin/docker-compose". Please refer to the documentation for details. <<<<
[+] Running 41/40
✔ conduktor-monitoring 9 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 24.7s
✔ a8d641fca972 Download complete 4.4s
✔ 101e12ca8f77 Download complete 3.7s
✔ bfbe77e41a78 Download complete 11.6s
✔ ee24ab29acdd Download complete 0.1s
✔ 33c21a15c349 Download complete 0.0s
✔ 4452c0ea84e1 Download complete 0.1s
✔ b8515fa7cfe8 Download complete 1.3s
✔ 65b8e95e0848 Download complete 0.4s
✔ e1babfdf69ee Download complete 0.0s
✔ postgresql 15 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 25.7s
✔ 7ac4aa6c99e9 Download complete 0.1s
✔ f695f6dafe22 Download complete 0.0s
✔ 559a76444520 Download complete 20.1s
✔ 3531e6d72caa Download complete 1.9s
✔ 059b5c43db41 Download complete 0.1s
✔ 8460f5c0f010 Download complete 0.6s
✔ e54f6c55c74f Download complete 0.2s
✔ c419f8ddd2fb Download complete 0.1s
✔ 227694dd68c2 Download complete 6.9s
✔ 89359d0f48ec Download complete 4.5s
✔ 9d713930d2a4 Download complete 4.5s
✔ 44cbcab12c43 Download complete 4.4s
✔ 4692faccd974 Download complete 1.0s
✔ bdd7284ffc93 Download complete 0.0s
✔ 08c67119d9e9 Download complete 0.0s
✔ conduktor-platform 14 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 25.7s
✔ 41a62902875f Download complete 8.9s
✔ e56c9bb4a7dc Download complete 0.4s
✔ 3a4dce1e12d5 Download complete 5.1s
✔ 1653a56f66b2 Download complete 5.1s
✔ d94ca0f2e242 Download complete 5.0s
✔ 91af27173303 Download complete 2.1s
✔ 31e75b73d9e9 Download complete 5.1s
✔ 7124c02b98b4 Download complete 2.0s
✔ 8ea305fe4917 Download complete 0.4s
✔ 505dab3dfbb4 Download complete 0.2s
✔ 4d23a212f33c Download complete 2.6s
✔ e0a99bb7bca2 Download complete 2.3s
✔ 1962f92b8ab3 Download complete 0.3s
✔ 453d43d7e390 Download complete 0.0s
[+] Running 6/6
✔ Network riccardoforina_default Created 0.0s
✔ Volume "riccardoforina_pg_data" Created 0.0s
✔ Volume "riccardoforina_conduktor_data" Created 0.0s
✔ Container riccardoforina-conduktor-monitoring-1 Created 0.1s
✔ Container riccardoforina-postgresql-1 Created 0.1s
✔ Container riccardoforina-conduktor-platform-1 Created 0.0s
Attaching to riccardoforina-conduktor-monitoring-1, riccardoforina-conduktor-platform-1, riccardoforina-postgresql-1
riccardoforina-conduktor-platform-1 | Platform log level set to INFO
riccardoforina-conduktor-monitoring-1 | Platform log level set to INFO
riccardoforina-conduktor-platform-1 | 2024-06-28T08:29:24Z [entrypoint] INFO -
riccardoforina-conduktor-platform-1 | Welcome to Conduktor Console !
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣴⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣴⠟⢻⡇⠀⠀⠀⠀⣠⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣴⡿⠋⠀⢸⣧⣤⣀⡀⠺⢿⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣾⡿⠀⠀⠀⢸⣿⣿⣿⣿⣆⠀⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⠇⠀⠀⢀⣼⣿⣿⣿⣿⣿⣷⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣿⡿⠀⠀⠴⠿⣿⣿⣦⣄⣠⣿⣿⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣾⣿⡇⠀⠀⠀⠀⠀⠈⠉⠉⠛⠛⠿⢿⣦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⢀⡀⠀⠀⠀⠀⢿⣿⣦⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⣿⣿⣷⣤⣀⠀⠀⠀⠀⠐⣿⣿⣷⣦⣤⣀⣤⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠚⠛⠛⠛⠛⠛⠛⠂⠀⠀⠀⠘⢿⣿⣿⠋⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⣻⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠐⢦⣤⣀⡀⠀⠀⢀⣤⣾⣿⠟⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢿⣿⣿⣾⣿⣿⠟⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 | ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢿⡿⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
riccardoforina-conduktor-platform-1 |
[...snip...]
^CGracefully stopping... (press Ctrl+C again to force)
Aborting on container exit...
~ 29s
[+] Stopping 3/3
✔ Container riccardoforina-conduktor-monitoring-1 Stopped 3.6s
✔ Container riccardoforina-conduktor-platform-1 Stopped 10.1s
✔ Container riccardoforina-postgresql-1 Stopped 0.2s
canceled
~
❯ podman machine stop test
Machine "test" stopped successfully
~
❯ podman machine rm test
The following files will be deleted:
/Users/riccardoforina/.config/containers/podman/machine/applehv/test.json
/var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/test.sock
/var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/test-gvproxy.sock
/var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/test-api.sock
/var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/test.log
Are you sure you want to continue? [y/N] y
~
❯ sysctl -n hw.ncpu
8
~
❯ podman machine init test --cpus 3
Looking up Podman Machine image at quay.io/podman/machine-os:5.1 to create VM
Extracting compressed file: test-arm64.raw: done
Machine init complete
To start your machine run:
podman machine start test
~ 11s
❯ podman machine start test
Starting machine "test"
This machine is currently configured in rootless mode. If your containers
require root permissions (e.g. ports < 1024), or if you run into compatibility
issues with non-podman clients, you can switch using the following command:
podman machine set --rootful test
API forwarding listening on: /var/run/docker.sock
Docker API clients default to this address. You do not need to set DOCKER_HOST.
Machine "test" started successfully
~ 11s
❯ podman system connection default test
~
❯ podman compose up
>>>> Executing external compose provider "/usr/local/bin/docker-compose". Please refer to the documentation for details. <<<<
[+] Running 18/33
⠴ conduktor-monitoring 8 layers [⣄⣿⣀⣦⣿⣿⣿⣿] 84.98MB/185.5MB Pulling 4.4s
⠧ a8d641fca972 Downloading [===================> ] 46.64MB/118.7MB 2.6s
✔ 33c21a15c349 Download complete 0.0s
⠼ bfbe77e41a78 Downloading [================> ] 9.132MB/27.35MB 2.4s
⠸ 101e12ca8f77 Downloading [=====================================> ] 29.21MB/39.44MB 2.2s
✔ 4452c0ea84e1 Download complete 0.0s
✔ ee24ab29acdd Download complete 0.1s
✔ b8515fa7cfe8 Download complete 0.0s
✔ 65b8e95e0848 Download complete 1.0s
⠴ postgresql 12 layers [⣿⠀⣿⣿⣷⣿⣿⣿⠀⣿⠀⠀] 22.34MB/141.4MB Pulling 4.4s
✔ 8460f5c0f010 Download complete 0.3s
⠙ 559a76444520 Downloading [=====> ] 3.357MB/29.18MB 2.1s
✔ 059b5c43db41 Download complete 0.1s
✔ 7ac4aa6c99e9 Download complete 1.1s
⠋ 3531e6d72caa Downloading [============================================> ] 7.117MB/8.069MB 1.9s
✔ c419f8ddd2fb Download complete 0.1s
✔ f695f6dafe22 Download complete 0.3s
✔ e54f6c55c74f Download complete 0.2s
⠼ 227694dd68c2 Downloading [=====> ] 11.86MB/104.1MB 1.3s
✔ 89359d0f48ec Download complete 0.2s
⠇ 44cbcab12c43 Pulling fs layer 0.7s
⠸ 9d713930d2a4 Pulling fs layer 0.2s
⠴ conduktor-platform 10 layers [⠀⣶⣿⣿⣿⣿⣿⠀⣿⠀] 80.49MB/660MB Pulling 4.4s
⠧ 41a62902875f Downloading [======> ] 42.44MB/329.4MB 2.6s
⠼ 1653a56f66b2 Downloading [========================================> ] 32.22MB/39.67MB 2.4s
✔ e56c9bb4a7dc Download complete 0.3s
✔ d94ca0f2e242 Download complete 0.7s
✔ 3a4dce1e12d5 Download complete 0.0s
✔ 7124c02b98b4 Download complete 0.1s
✔ 91af27173303 Download complete 0.1s
⠼ 31e75b73d9e9 Downloading [=> ] 5.823MB/290.9MB 1.4s
✔ 8ea305fe4917 Download complete 0.7s
⠼ 505dab3dfbb4 Pulling fs layer 0.3s
unexpected EOF
Error: executing /usr/local/bin/docker-compose up: exit status 18
~
❯ system_profiler SPHardwareDataType | grep "Total Number of Cores"
Total Number of Cores: 8 (4 performance and 4 efficiency)
Debugged this with @cfergeau: we used a debug build of gvproxy and managed to capture the error in a video and collect some logs
https://github.com/containers/podman/assets/966316/fdbaa2a9-2ed5-48d7-befe-5b6213649c28
time="2024-06-28T10:58:49+02:00" level=info msg="waiting for clients..."
time="2024-06-28T10:58:50+02:00" level=info msg="new connection from /Users/riccardoforina/Library/Application Support/vfkit/net-15351-479784434.sock to /var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/test-gvproxy.sock"
time="2024-06-28T10:59:10+02:00" level=error msg="write unixgram /var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/test-gvproxy.sock->/Users/riccardoforina/Library/Application Support/vfkit/net-15351-479784434.sock: sendto: no buffer space available"
time="2024-06-28T10:59:10+02:00" level=error msg="cannot receive packets from /Users/riccardoforina/Library/Application Support/vfkit/net-15351-479784434.sock, disconnecting: cannot read size from socket: read unixgram /var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/test-gvproxy.sock: use of closed network connection"
time="2024-06-28T10:59:10+02:00" level=error msg="error closing unixgram:///var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/test-gvproxy.sock: \"close unixgram /var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/test-gvproxy.sock: use of closed network connection\""
time="2024-06-28T10:59:10+02:00" level=error msg="gvproxy exiting: cannot read size from socket: read unixgram /var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/test-gvproxy.sock: use of closed network connection"
It appears to be linked to network speed. If the connection is fast enough, the network buffer can get saturated and cause the crash. Having more than one CPU assigned to the machine exacerbates the problem.
The issue is indeed
time="2024-06-28T10:59:10+02:00" level=error msg="write unixgram /var/folders/09/9bv34hm11vb94tmwhtqyyx880000gn/T/podman/test-gvproxy.sock->/Users/riccardoforina/Library/Application Support/vfkit/net-15351-479784434.sock: sendto: no buffer space available"
I was seeing this when I added vfkit support to gvisor-tap-vsock, until I added https://github.com/containers/gvisor-tap-vsock/blob/6dbbe087eb62775e99abc69ac232d13f74cac73a/pkg/transport/unixgram_darwin.go#L24-L30

This is unfortunately not good enough: the maximum for these values is 8*1024*1024, and Riccardo is still having this issue with the maximum. If I remember correctly, the "buffer is full" errors were coming from the tx/rx functions in https://github.com/containers/gvisor-tap-vsock/blob/main/pkg/tap/switch.go
I've filed this in gvisor-tap-vsock: https://github.com/containers/gvisor-tap-vsock/issues/367
Thank you both @riccardo-forina and @cfergeau for troubleshooting!
FWIW...
╰─ ps aux |grep vfkit
edeandre 33136 0.0 0.1 411958368 19776 ?? S 8:54AM 0:14.92 /opt/podman/bin/vfkit --cpus 6 --memory 7629 --bootloader efi,variable-store=/Users/edeandre/.local/share/containers/podman/machine/applehv/efi-bl-podman-machine-default,create --device virtio-blk,path=/Users/edeandre/.local/share/containers/podman/machine/applehv/podman-machine-default-arm64.raw --device virtio-rng --device virtio-serial,logFilePath=/var/folders/6j/dk6dwmgd7874pjx7s9dknhxw0000gn/T/podman/podman-machine-default.log --device virtio-vsock,port=1025,socketURL=/var/folders/6j/dk6dwmgd7874pjx7s9dknhxw0000gn/T/podman/podman-machine-default.sock,listen --device rosetta,mountTag=rosetta,install --device virtio-net,unixSocketPath=/var/folders/6j/dk6dwmgd7874pjx7s9dknhxw0000gn/T/podman/podman-machine-default-gvproxy.sock,mac=5a:94:ef:e4:0c:ee --device virtio-fs,sharedDir=/Users,mountTag=a2a0ee2c717462feb1de2f5afd59de5fd2d8 --device virtio-fs,sharedDir=/private,mountTag=71708eb255bc230cd7c91dd26f7667a7b938 --device virtio-fs,sharedDir=/var/folders,mountTag=a0bb3a2c8b0b02ba5958b0576f0d6530e104 --restful-uri tcp://localhost:50928
edeandre 93447 0.0 0.0 410733264 1488 s001 S+ 8:21AM 0:00.00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn --exclude-dir=.idea --exclude-dir=.tox vfkit
╭─ ~/workspaces/quarkus/quarkus-super-heroes-main main ························································· ✔ system Node
╰─ ps aux |grep gvproxy
edeandre 33136 0.0 0.1 411958368 19776 ?? S 8:54AM 0:14.92 /opt/podman/bin/vfkit --cpus 6 --memory 7629 --bootloader efi,variable-store=/Users/edeandre/.local/share/containers/podman/machine/applehv/efi-bl-podman-machine-default,create --device virtio-blk,path=/Users/edeandre/.local/share/containers/podman/machine/applehv/podman-machine-default-arm64.raw --device virtio-rng --device virtio-serial,logFilePath=/var/folders/6j/dk6dwmgd7874pjx7s9dknhxw0000gn/T/podman/podman-machine-default.log --device virtio-vsock,port=1025,socketURL=/var/folders/6j/dk6dwmgd7874pjx7s9dknhxw0000gn/T/podman/podman-machine-default.sock,listen --device rosetta,mountTag=rosetta,install --device virtio-net,unixSocketPath=/var/folders/6j/dk6dwmgd7874pjx7s9dknhxw0000gn/T/podman/podman-machine-default-gvproxy.sock,mac=5a:94:ef:e4:0c:ee --device virtio-fs,sharedDir=/Users,mountTag=a2a0ee2c717462feb1de2f5afd59de5fd2d8 --device virtio-fs,sharedDir=/private,mountTag=71708eb255bc230cd7c91dd26f7667a7b938 --device virtio-fs,sharedDir=/var/folders,mountTag=a0bb3a2c8b0b02ba5958b0576f0d6530e104 --restful-uri tcp://localhost:50928
edeandre 93601 0.0 0.0 410592976 1120 s001 R+ 8:21AM 0:00.00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn --exclude-dir=.idea --exclude-dir=.tox gvproxy
I never thought I'd hear someone say
Can you please slow your machine down - it's too fast
Added to our sprint for review/investigation: https://github.com/orgs/crc-org/projects/1?pane=issue&itemId=69134292
Issue Description
I've installed `podman compose` according to the instructions at https://podman-desktop.io/docs/compose/setting-up-compose. When I try to run `podman compose up` for certain compose yaml files, it errors out. If I do a `podman pull` of all of the images in the compose file one at a time, then the `podman compose up` seems to work.

Even doing `podman compose pull` seems to blow up.
seems to blow up.Steps to reproduce the issue
cd quarkus-super-heroes
podman compose -f deploy/docker-compose/java17.yml pull
Describe the results you received
Furthermore, after this happens the podman machine is totally hosed. It is still running but is completely unresponsive. It has to be restarted before it is usable again.
Describe the results you expected
I expect it to work.
podman info output
Podman in a container
No
Privileged Or Rootless
Privileged
Upstream Latest Release
Yes
Additional environment details
macOS ARM architecture
Additional information
No response