rancher-sandbox / rancher-desktop

Container Management and Kubernetes on the Desktop
https://rancherdesktop.io
Apache License 2.0

chown error on bind mount when trying to launch postgres via docker compose #1209

Open valouille opened 2 years ago

valouille commented 2 years ago

Rancher Desktop Version

0.7.1

Rancher Desktop K8s Version

1.22.5

What operating system are you using?

macOS

Operating System / Build Version

macOS Monterey 12.1

What CPU architecture are you using?

arm64 (Apple Silicon)

Windows User Only

No response

Actual Behavior

When trying to launch a Postgres container with a bind mount, the container fails to start because of a chown error on the bind-mounted folder.

Steps to Reproduce

Clone the repo https://github.com/docker/awesome-compose, go to the folder nginx-golang-postgres, and edit the file docker-compose.yml to use a bind mount like the following:

services:
  backend:
    build: backend
    secrets:
      - db-password
    depends_on:
      - db
  db:
    image: postgres
    restart: always
    secrets:
      - db-password
    volumes:
      - $PWD/db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=example
      - POSTGRES_PASSWORD_FILE=/run/secrets/db-password
    expose:
      - 5432

  proxy:
    build: proxy
    ports:
      - 8000:8000
    depends_on:
      - backend
#volumes:
#  db-data:
secrets:
  db-password:
    file: db/password.txt

Run the following command: docker compose up

Result

Error response from daemon: error while creating mount source path '~/github.com/docker/awesome-compose/nginx-golang-postgres/db-data': chown ~/github.com/docker/awesome-compose/nginx-golang-postgres/db-data: permission denied

Expected Behavior

I would expect the folder to work as a bind mount so that I can access and modify the files directly on the host.

Additional Information

No response

guild-jonathan-kaczynski commented 2 years ago

Rancher Desktop Version

0.7.1 and 1.0.0-beta.1

Rancher Desktop K8s Version

1.23.1 (latest), using the dockerd (moby) container runtime

What operating system are you using?

macOS

Operating System / Build Version

macOS Catalina 10.15.7 (19H1615)

What CPU architecture are you using?

x86-64 (Intel)

Windows User Only

No response

Actual Behavior

Trying to chown a folder mounted as a volume, from inside the container fails with a permission error.

Steps to Reproduce

Here is some of the output from running the entrypoint script manually, if it helps any.

$ mkdir ./postgres

$ ls -ld ./postgres
drwxr-xr-x  2 jonathankaczynski  staff  64 Jan 24 13:38 ./postgres

$ docker run --rm -it \
    --entrypoint /bin/bash \
    -v "$(pwd)/postgres:/var/lib/postgresql/data" \
    postgres
root@98a1f91309fc:/# bash -x /usr/local/bin/docker-entrypoint.sh postgres
… snip …
+ docker_create_db_directories
+ local user
++ id -u
+ user=0
+ mkdir -p /var/lib/postgresql/data
+ chmod 700 /var/lib/postgresql/data
+ mkdir -p /var/run/postgresql
+ chmod 775 /var/run/postgresql
+ '[' -n '' ']'
+ '[' 0 = 0 ']'
+ find /var/lib/postgresql/data '!' -user postgres -exec chown postgres '{}' +
root@ff5dae1c266d:/# find /var/lib/postgresql/data '!' -user postgres
/var/lib/postgresql/data

root@ff5dae1c266d:/# ls -ld /var/lib/postgresql/data
drwx------ 1 501 dialout 64 Jan 24 18:32 /var/lib/postgresql/data

root@ff5dae1c266d:/# exit
exit
$ ls -ld ./postgres
drwx------  2 jonathankaczynski  staff  64 Jan 24 13:32 ./postgres
guild-jonathan-kaczynski commented 2 years ago

Here's a minimal test case derived from the above practical example.

There seem to be two potentially independent mounted volume errors.

The first error occurs when the mounted volume does not exist on the host OS (macOS) prior to running the docker command.

$ ls -ld ./foobar
ls: ./foobar: No such file or directory

$ docker run --rm -it -v "$(pwd)/foobar:/opt/foobar" debian:bullseye-slim
docker: Error response from daemon: error while creating mount source path '/Users/jonathankaczynski/foobar': chown /Users/jonathankaczynski/foobar: permission denied.

$ ls -ld ./foobar
drwxr-xr-x  2 jonathankaczynski  staff  64 Jan 24 14:50 ./foobar

$ docker run --rm -it -v "$(pwd)/foobar:/opt/foobar" debian:bullseye-slim

root@392147ccc4f5:/# exit
exit

The second error occurs when attempting to change the ownership of the mount point from within the container, even though changing the file mode succeeds.

$ mkdir ./foobar

$ docker run --rm -it -v "$(pwd)/foobar:/opt/foobar" debian:bullseye-slim

root@cc1c034865bd:/# ls -ld /opt/foobar
drwxr-xr-x 1 501 dialout 64 Jan 24 19:50 /opt/foobar

root@cc1c034865bd:/# groupadd -r postgres --gid=999

root@cc1c034865bd:/# useradd -r -g postgres --uid=999 postgres

root@cc1c034865bd:/# chown postgres /opt/foobar
chown: changing ownership of '/opt/foobar': Permission denied

root@cc1c034865bd:/# chmod 700 /opt/foobar

root@cc1c034865bd:/# exit
exit

$ ls -ld ./foobar
drwx------  2 jonathankaczynski  staff  64 Jan 24 14:50 ./foobar
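
Putting those two observations together, a minimal sketch of how to sidestep both failure modes under the current behavior (reusing the ./foobar example above): pre-create the mount source on the host so the daemon never has to create and chown it, and adjust the mode rather than the owner inside the container.

# 1. Pre-create the bind-mount source on the host, so the daemon does not
#    have to create (and chown) it itself.
mkdir -p ./foobar

# 2. chmod on the mount point succeeds even though chown is denied.
docker run --rm -v "$(pwd)/foobar:/opt/foobar" debian:bullseye-slim \
    chmod 700 /opt/foobar

This only helps images whose entrypoints tolerate a mismatched owner; the postgres entrypoint traced above does not.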
luiz290788 commented 2 years ago

I'm having the same problem, this is the only thing holding me from using Rancher Desktop. Any progress here?

jeesmon commented 2 years ago

MacOS, RD v1.0.0

Getting permission error when running postgres image with bind-mount (-v $(pwd):/var/lib/pgsql/data)

docker run --rm --name postgresql -e POSTGRESQL_DATABASE=my-db -e POSTGRESQL_USER=user -e POSTGRESQL_PASSWORD=pass -p 5432:5432 -v $(pwd):/var/lib/pgsql/data centos/postgresql-96-centos7

The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/pgsql/data/userdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... FATAL:  could not read block 2 in file "base/1/1255_fsm": read only 0 of 8192 bytes
PANIC:  cannot abort transaction 1, it was already committed
child process was terminated by signal 6: Aborted
initdb: removing contents of data directory "/var/lib/pgsql/data/userdata"

But if I use named volume (-v postgres-data:/var/lib/pgsql/data), it works fine

docker run --rm --name postgresql -e POSTGRESQL_DATABASE=my-db -e POSTGRESQL_USER=user -e POSTGRESQL_PASSWORD=pass -p 5432:5432 -v postgres-data:/var/lib/pgsql/data centos/postgresql-96-centos7

The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/pgsql/data/userdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

    pg_ctl -D /var/lib/pgsql/data/userdata -l logfile start

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
 done
server started
/var/run/postgresql:5432 - accepting connections
=> sourcing /usr/share/container-scripts/postgresql/start/set_passwords.sh ...
ALTER ROLE
waiting for server to shut down.... done
server stopped
Starting server...
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".

docker volume list
DRIVER    VOLUME NAME
local     postgres-data
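
For compose users, the equivalent of the working invocation above is to declare a named volume instead of a bind mount. A minimal sketch based on the compose file from the original report (secrets trimmed for brevity; the inline password is a placeholder):

services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=example
      - POSTGRES_PASSWORD=example   # placeholder; the original uses a secret file
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume instead of $PWD/db-data

volumes:
  db-data: {}

The trade-off is that the data now lives on the VM's ext4 filesystem rather than in a host folder, which gives up exactly the direct host access the original report wanted from a bind mount.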
petersondrew commented 2 years ago

I think this may be an issue with the underlying Lima. I experience the exact same behavior using either Rancher Desktop or colima.

Possibly related lima-vm/lima#504

guild-jonathan-kaczynski commented 2 years ago

They converted that issue into a discussion https://github.com/lima-vm/lima/discussions/505 and marked it as answered.

sshfs isn't robust (and fast) enough to be used as /var/lib/postgresql. Please use lima nerdctl volume create to create a named volume inside the guest ext4 filesystem.

To me, it doesn't feel like that answer addresses the broader concerns raised above.

guild-jonathan-kaczynski commented 2 years ago

There's also this earlier issue thread: https://github.com/lima-vm/lima/issues/231

The last comment, which is from Dec, was:

The plan is to use mapped-xattr or mapped-file of virtio 9P, but the patch is not merged for macOS hosts yet, and seems to need more testers: NixOS/nixpkgs#122420

willcohen commented 2 years ago

As a followup, the latest version of the patch is https://gitlab.com/wwcohen/qemu/-/tree/9p-darwin and that's where the in-progress work will go as it progresses towards resubmission upstream. Any comments on how to improve would be GREATLY welcomed before I submit again.

guild-jonathan-kaczynski commented 2 years ago

From https://github.com/NixOS/nixpkgs/pull/122420, it looks like good progress has been made:

9p-darwin has been merged upstream

I'm also going to close this PR in favor of https://github.com/NixOS/nixpkgs/pull/162243, at this point. I think it's okay if any additional final discussion still happens here since this particular issue has been referenced in so many places, but the work is now done!

willcohen commented 2 years ago

Please let me know if you have any questions!

dennisdaotvlk commented 2 years ago

Facing the same error with docker-compose and docker run -v


marnen commented 2 years ago

Still an issue with Rancher Desktop 1.4.1, exactly as described above. This is the only thing preventing me from using Rancher as opposed to Docker.

laoyaoer commented 2 years ago

Looks like I have encountered the same issue when trying to run GitLab. Unlike oxfs, all files belong to the same UID 501, both on my MacBook and inside the container, so the container keeps restarting and logging this error: directory[/etc/gitlab] (gitlab::default line 35) had an error: Errno::EACCES: Permission denied @ apply2files - /etc/gitlab

ggustafsson commented 2 years ago

This will (probably) be fixed when Lima bumps up to version 1.0.

$ ~/Applications/Rancher\ Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl --version
limactl version 0.9.2-54-ge82db87

Why? Because 9P will be used instead of sshocker by default, i.e. virtio-9p-pci in QEMU.

https://github.com/lima-vm/lima/issues/20

https://github.com/lima-vm/lima/blob/master/docs/mount.md

ggustafsson commented 2 years ago

FYI...

$ cat ~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml
mountType: 9p

$ docker compose up
[+] Running 4/0
 ⠿ Network rabbitmq_default        Created                                                                                                                                                                        0.0s
 ⠿ Container rabbitmq-rabbitmq1-1  Created                                                                                                                                                                        0.0s
 ⠿ Container rabbitmq-rabbitmq3-1  Created                                                                                                                                                                        0.0s
 ⠿ Container rabbitmq-rabbitmq2-1  Created                                                                                                                                                                        0.0s
Attaching to rabbitmq-rabbitmq1-1, rabbitmq-rabbitmq2-1, rabbitmq-rabbitmq3-1
...
rabbitmq-rabbitmq3-1  |   Starting broker... completed with 4 plugins.
rabbitmq-rabbitmq1-1  |   Starting broker... completed with 4 plugins.
rabbitmq-rabbitmq2-1  |   Starting broker... completed with 4 plugins.

Switching over to 9P does indeed solve part of the problem for me. The chown issue disappears, but it was replaced with file-creation issues in my case; loosening permissions on all dirs (777) resolved that, however.
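
For reference, the loose-permissions workaround described there is just the following (the directory name is a hypothetical example):

# World-writable, so container users can create files through the 9p
# mount despite the UID mismatch. Directory name is an example.
chmod -R 777 ./db-data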

laoyaoer commented 2 years ago

(quoting @ggustafsson's 9p override and Lima 1.0 comments above)

Thanks, I've already changed to Docker Desktop. I'll install Rancher again when this issue is solved.

leenooks commented 2 years ago

I too am keen to see this resolved. As a Docker Desktop user developing on my M1 Mac, I have no permission issues when starting docker containers. All my project files are owned by my ID, and while the containers that start (e.g. Postgres or MySQL) may attempt to chown and change permissions on files as part of their normal startup process, those attempts don't fail and the containers run happily.

On Rancher Desktop, those file permission changes fail with permission denied errors - and if I change the uid/gid (from the host) to what the process runs as in the container, then those files are not visible and file not found errors ensue.

I changed from Docker Desktop on my company issued device, as I didn't want my company to think that I was using commercial software without meeting the license requirements, nor did I want Docker to think I was doing the same.

As much as I am glad that Rancher Desktop exists, I cannot use it as a functional replacement to Docker Desktop :(

EDIT: I just tried 9p in the override.yaml as suggested above - and it does go a long way to making Rancher Desktop a functional replacement for Docker Desktop. I had an issue with permissions, which I had to fix "in container", but once done, things were working better. :D

guild-jonathan-kaczynski commented 1 year ago

I see this discussion over on the lima project page https://github.com/lima-vm/lima/issues/971 (v1.0 roadmap: change the default mount driver from reverse-sshfs to 9p)

ryancurrah commented 1 year ago

Looks like 6 days ago Lima documented using virtiofs https://github.com/lima-vm/lima/commit/c18ae239b69a47db77436765b9b4861aaa0d595d

jandubois commented 1 year ago

Looks like 6 days ago Lima documented using virtiofs lima-vm/lima@c18ae23

It is still unreleased. Also will require macOS 13 (Ventura).

yllekz commented 1 year ago

Also having this problem but it's across more than postgres/mongo - random containers of mine are having chown-related issues like this. I attempted the 9p workaround but that made the problem even worse (all of my containers started chaotically crashing/restarting)

jsoref commented 1 year ago

You can include the following content in ~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml:

mountType: 9p
mounts:
  - location: "~"
    9p:
      securityModel: mapped-xattr
      cache: "mmap"

It should allow this to work (you must restart Rancher Desktop to apply this setting).

Caveats: any symlinks on your host system will be seen as the referenced object in the VM/container. If there's a symlink loop, and something tries to follow it, it'll eat its own tail (potentially slowly depending on how things behave).

The databases I'm playing w/ (postgres, redis, neo4j) don't generally deal in symlinks, so I believe it's a satisfactory configuration for my database use cases. (It may create a mess for all of my other use cases, but that remains to be seen.)
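
For anyone applying this, here is a quick way to confirm the override took effect after restarting Rancher Desktop - a sketch that reuses the failing reproduction from earlier in the thread (rdctl ships with Rancher Desktop; the grep target assumes the home mount is listed as 9p):

# Inside the Lima VM, the home mount should now show type 9p.
rdctl shell mount | grep 9p

# The earlier failing case: chown on a bind mount should now succeed.
mkdir -p ./foobar
docker run --rm -v "$(pwd)/foobar:/opt/foobar" debian:bullseye-slim \
    chown 999:999 /opt/foobar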

QuentinStouffs commented 1 year ago

(quoting @jsoref's override.yaml workaround above)

Thanks a lot, this solved my issue!

Jay-Madden commented 1 year ago

This ended up being a problem with colima on my M1 Mac, and ultimately with the Lima VM. I had to upgrade to Ventura, switch colima to the vz vmType, set my mountType to virtiofs, then uninstall colima and lima, reinstall them, and the mount worked.

steps I took

  1. run colima template and set vmType and mountType

    # Virtual Machine type (qemu, vz)
    # NOTE: this is macOS 13 only. For Linux and macOS <13.0, qemu is always used.
    #
    # vz is macOS virtualization framework and requires macOS 13
    #
    # Default: qemu
    vmType: vz

    # Utilise rosetta for amd64 emulation (requires m1 mac and vmType vz)
    #
    # Default: false
    rosetta: false

    # Volume mount driver for the virtual machine (virtiofs, 9p, sshfs).
    #
    # virtiofs is limited to macOS and vmType vz. It is the fastest of the options.
    #
    # 9p is the recommended and the most stable option for vmType qemu.
    #
    # sshfs is faster than 9p but the least reliable of the options (when there are lots
    # of concurrent reads or writes).
    #
    # Default: virtiofs (for vz), sshfs (for qemu)
    mountType: virtiofs

  2. uninstall colima and lima: `brew uninstall colima && brew uninstall lima`
  3. reinstall colima and start it up: `brew install colima && colima start`
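
If memory serves, recent colima releases also expose these settings as start flags, which avoids editing the template by hand - treat the exact flag names as an assumption and confirm against your version:

# Assumed flags; verify with `colima start --help` for your version.
colima start --vm-type vz --mount-type virtiofs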

roy-t commented 1 year ago

I encountered a similar issue where on Windows hosts the non-root user inside the container no longer had write access to mounts after switching from docker-desktop to rancher-desktop.

I'm using a docker-compose file with the following volume

volumes:
  - ${ROOT_DIRECTORY}/logs:/opt/something/logs

If the directory ${ROOT_DIRECTORY}/logs does not exist on the Windows host, it gets created when I run docker-compose up for the first time. The root user inside the container will own the /opt/something/logs directory, and the non-root user inside the container will only have read access.

If the directory already existed on the Windows host (i.e. it was not created by Rancher Desktop), the non-root user inside the container also has write access.

I guess there's some difference in how Rancher Desktop creates the folder on the host compared to how Docker Desktop did it on Windows hosts.
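
A sketch of the workaround that observation implies - pre-create the host directory before bringing the stack up (ROOT_DIRECTORY is the variable from the compose snippet above; POSIX shell syntax assumed):

# Create the bind-mount source first, so it is not auto-created as a
# root-owned directory that the non-root container user cannot write to.
mkdir -p "${ROOT_DIRECTORY}/logs"
docker-compose up -d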

jbilliau-rcd commented 1 year ago

I'm trying to use Rancher Desktop with the simplest compose file ever and get a similar error:

version: '3'
services:
  redis:
    container_name: rmo-redis
    image: redis:6
    ports:
      - 6379:6379
    command: ['redis-server']
    volumes:
      - ./:/tmp

Running "docker compose up" gives me:

➜  ~/git/scratch/composetest/web git:(main) ✗ docker compose up
[+] Running 1/0
 ⠿ Container rmo-redis  Recreated                                                                                                                                                                      0.0s
Attaching to rmo-redis
rmo-redis  | chown: changing ownership of '.': Permission denied
rmo-redis exited with code 1

It seems like the only solution is the mountType: 9p workaround described by @QuentinStouffs above, but according to this link - https://github.com/lima-vm/lima/issues/20#issue-895105285 (and our own devs), it's ridiculously slow to the point of being unusable.

spawnrider commented 1 year ago

Hi, same issue for me with Jaeger/Prometheus using Rancher Desktop & Docker compose on a Mac M1:


[+] Running 2/0
 ⠿ Container jaeger-tracing-1     Created                                                                  0.0s
 ⠿ Container jaeger-prometheus-1  Created                                                                  0.0s
Attaching to jaeger-prometheus-1, jaeger-tracing-1
Error response from daemon: error while creating mount source path '/Users/xxx/Documents/Dev/Playground/opentelemetry/jaeger/prometheus.yml': chown /Users/xxx/Documents/Dev/Playground/opentelemetry/jaeger/prometheus.yml: permission denied

yangtze64 commented 1 year ago

How can this problem be solved? My Rancher Desktop version is 1.9.0-tech-preview and I still have a similar problem.

jsedano-emobg commented 1 year ago

Any chance that this problem https://github.com/rancher-sandbox/rancher-desktop/issues/2514 is related to this one?

jsoref commented 1 year ago

@jsedano-emobg: sure?

Rancher Desktop (on macOS) uses Lima, and the way host files are mapped to the guest in Lima is not the same as the way Docker Desktop (on macOS) does things. You can change how Lima does it (e.g. via 9p -- https://github.com/rancher-sandbox/rancher-desktop/issues/1209#issuecomment-1370181132), but it isn't a 100% feature-for-feature, quirk-for-quirk implementation.

jsedano-emobg commented 1 year ago

No no, I am not sure at all that it is the same, I was just asking because I couldn't fully understand this issue.

On issue #2514, it behaves differently in Rancher Desktop and Docker Desktop, and given that RD showed up later, the best way to gain traction is to behave like DD.

jsoref commented 1 year ago

The reason you're experiencing that issue is the same reason people are experiencing this issue, and a proper docker-desktop compatible fix for this issue should fix that issue.

And yes, the RD folks understand that quirk-for-quirk compatibility is helpful to gaining adopters.

aizenrivki commented 1 year ago

(quoting @jbilliau-rcd's redis compose example and 9p comment above)

In my case, adding "user: redis" solved the problem.
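
For clarity, that amounts to one extra line in the compose file @jbilliau-rcd posted above - a sketch; running as the redis user means the image's entrypoint never attempts the root-time chown of the mounted directory:

services:
  redis:
    container_name: rmo-redis
    image: redis:6
    user: redis          # skip the entrypoint's chown of the data dir
    ports:
      - 6379:6379
    command: ['redis-server']
    volumes:
      - ./:/tmp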

dro-ex commented 1 year ago

Complete noob here - but from what I am understanding, this issue is preventing bind mounts from mounting when using VZ, and we're awaiting a fix from the Lima team - is that correct?

sourcecodemage commented 1 year ago

I have Rancher Desktop, but colima wasn't installed, which makes me wonder if this solution is for me. System: M1 MacBook running Ventura 13.5.

sourcecodemage commented 1 year ago

Good call. I tried it and I got "failed: Too many levels of symbolic links (40)", so it won't work for my use case. I'll try the colima method.

jsoref commented 1 year ago

fwiw, vz is available in 1.9.1.

sourcecodemage commented 1 year ago

fwiw, vz is available in 1.9.1.

I found and enabled those settings late yesterday. RD said it needed to restart afterwards, so I stopped and started it.

12+ hours later, it still says "starting". I'll try rebooting my workstation and see how things go.

jsoref commented 1 year ago

I tripped on something like that, but I can't remember what my problem was. Visit Slack (see https://rancherdesktop.io/) and ask for help.

fivestones commented 11 months ago

I'm also having the same problem as the OP, running Postgres in docker-compose with a bind mount running on Apple silicon M2. I get that same error. I'm running RD 1.10.0.

It looks like there's at least one workaround, but it might cause trouble with other containers that use symlinks.

I'm happy to provide more information if it can help someone debug this. I unfortunately don't know where to start to debug/fix it myself.

santoshborse commented 11 months ago

This solved my issue, thanks for posting

(quoting @jsoref's override.yaml workaround above)

chrisdaly3 commented 7 months ago

(quoting @jsoref's override.yaml workaround above)

Just chiming in in 2024: M1 Mac running docker-compose with a Mongo container, confirming this has resolved the build issue for now. Still to be determined whether or not any weird "side effects" pop up. Hopefully a native fix gets rolled out soon.

gigi888 commented 7 months ago

This works for me: https://stackoverflow.com/a/77803515/1183542. What was confusing to me at the beginning is that I didn't install Lima explicitly - it is included in RD.

CoderChang65535 commented 6 months ago

After my Mac upgraded to 14.3.1, former projects failed with this error: chown: changing ownership of '/var/lib/mysql/xxx': Permission denied. I set the MySQL files as a volume (volumes: …)

muhramadhan commented 6 months ago

The current workaround for me is to set VZ as the emulation type in Preferences. So far no downside for my use case.

galusben commented 5 months ago

I was able to resolve the issue on mac with the following setup:

(screenshots of the relevant Preferences settings)

I also allowed Rancher Desktop to use admin permissions and disabled Traefik.

llaszkie commented 5 months ago

current workaround for me is to set VZ as emulation in preference. so far no downside for my usecase

... and set "Volumes/Mount Type" to virtiofs at the same time. Disclaimer: I am still in the testing phase with this setting :-)

nothing2obvi commented 2 months ago

VZ and virtiofs also fixed it for me.
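
For reference, the same combination can be expressed in the Lima override file used earlier in this thread - a sketch, assuming macOS 13+ and a Rancher Desktop build whose bundled Lima supports vz (the Preferences UI is the supported way to change these):

# ~/Library/Application Support/rancher-desktop/lima/_config/override.yaml
vmType: vz            # Apple Virtualization.framework; macOS 13+ only
mountType: virtiofs   # vz-only mount driver, fastest of the options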

fonsitoubi commented 1 month ago

Same behavior on a Mac M1. With the latest Rancher Desktop versions, after the Ventura update it's not possible to use VZ (only QEMU), because of the well-known:

Error: /Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura exited with code 1

'time="2024-07-12T13:57:42+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:57:45+02:00" level=info msg="[hostagent] 2024/07/12 13:57:45 tcpproxy: for incoming conn 127.0.0.1:64800, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:57:55+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:57:58+02:00" level=info msg="[hostagent] 2024/07/12 13:57:58 tcpproxy: for incoming conn 127.0.0.1:64861, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:58:08+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:58:11+02:00" level=info msg="[hostagent] 2024/07/12 13:58:11 tcpproxy: for incoming conn 127.0.0.1:64913, error'... 5805 more characters, code: 1, [Symbol(child-process.command)]: '/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura start --tty=false 0' }


If going back to RD 1.11.1, this chown issue still occurs, and VZ can't be used either, as it gets stuck starting the VM with the progress bar loading infinitely.

`2024-07-12T11:52:05.190Z: > /Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura list --json {"name":"0","status":"Stopped","dir":"/Users/fonsito/Library/Application Support/rancher-desktop/lima/0","vmType":"vz","arch":"aarch64","cpuType":"","cpus":2,"memory":4294967296,"disk":107374182400,"network":[{"lima":"rancher-desktop-shared","macAddress":"52:55:55:1a:dd:d4","interface":"rd1"},{"lima":"rancher-desktop-bridged_en0","macAddress":"52:55:55:89:cb:f0","interface":"rd0"}],"sshLocalPort":53709,"sshConfigFile":"/Users/fonsito/Library/Application Support/rancher-desktop/lima/0/ssh.config","config":{"vmType":"vz","os":"Linux","arch":"aarch64","images":[{"location":"/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/alpine-lima-v0.2.31.rd10-rd-3.18.0.iso","arch":"aarch64"}],"cpus":2,"memory":"4294967296","disk":"100GiB","mounts":[{"location":"~","mountPoint":"~","writable":true,"sshfs":{"cache":true,"followSymlinks":false,"sftpDriver":""},"9p":{"securityModel":"none","protocolVersion":"9p2000.L","msize":"128KiB","cache":"mmap"},"virtiofs":{}},{"location":"/tmp/rancher-desktop","mountPoint":"/tmp/rancher-desktop","writable":true,"sshfs":{"cache":true,"followSymlinks":false,"sftpDriver":""},"9p":{"securityModel":"none","protocolVersion":"9p2000.L","msize":"128KiB","cache":"mmap"},"virtiofs":{}},{"location":"/Volumes","mountPoint":"/Volumes","writable":true,"sshfs":{"cache":true,"followSymlinks":false,"sftpDriver":""},"9p":{"securityModel":"none","protocolVersion":"9p2000.L","msize":"128KiB","cache":"mmap"},"virtiofs":{}},{"location":"/var/folders","mountPoint":"/var/folders","writable":true,"sshfs":{"cache":true,"followSymlinks":false,"sftpDriver":""},"9p":{"securityModel":"none","protocolVersion":"9p2000.L","msize":"128KiB","cache":"mmap"},"virtiofs":{}},{"location":"/Applications/Rancher Desktop.app/Contents/Resources/resources","mountPoint":"/Applications/Rancher Desktop.app/Contents/Resources/resources","writable":true,"sshfs":{"cache":true,"followSymlinks":false,"sftpDriver":""},"9p":{"securityModel":"none","protocolVersion":"9p2000.L","msize":"128KiB","cache":"mmap"},"virtiofs":{}}],"mountType":"virtiofs","ssh":{"localPort":53709,"loadDotSSHPubKeys":false,"forwardAgent":false,"forwardX11":false,"forwardX11Trusted":false},"firmware":{"legacyBIOS":false},"audio":{"device":""},"video":{"display":"none","vnc":{}},"provision":[{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\nmkdir -p /bootfs\nmount --bind / /bootfs\n# /bootfs/etc is empty on first boot because it has been moved to /mnt/data/etc by lima\n\n# Workaround for https://github.com/rancher-sandbox/rancher-desktop/issues/6051\n# should be removed when the issue is fixed in Lima itself\nif [ -f /bootfs/etc/network/interfaces ] && ! diff -q /etc/network/interfaces /bootfs/etc/network/interfaces; then\n cp /bootfs/etc/network/interfaces /etc/network/interfaces\n rc-service networking restart\nfi\nif [ -f /bootfs/etc/os-release ] && ! diff -q /etc/os-release /bootfs/etc/os-release; then\n cp /etc/machine-id /bootfs/etc\n cp /etc/ssh/ssh_host /bootfs/etc/ssh/\n mkdir -p /etc/docker /etc/rancher\n cp -pr /etc/docker /bootfs/etc\n cp -pr /etc/rancher /bootfs/etc\n\n rm -rf /mnt/data/etc.prev\n mkdir /mnt/data/etc.prev\n mv /etc/ /mnt/data/etc.prev\n mv /bootfs/etc/* /etc\n\n # install updated files from /usr/local, e.g. 
nerdctl, buildkit, cni plugins\n cp -pr /bootfs/usr/local /usr\n\n # lima has applied changes while the \"old\" /etc was in place; restart to apply them to the updated one.\n reboot\nfi\numount /bootfs\nrmdir /bootfs\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\nfstrim /mnt/data\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\nsed -i -E 's/^#?MaxSessions +[0-9]+/MaxSessions 25/g' /etc/ssh/sshd_config\nrc-service --ifstarted sshd reload\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\nif ! [ -d /mnt/data/root ]; then\n mkdir -p /root\n mv /root /mnt/data/root\nfi\nmkdir -p /root\nmount --bind /mnt/data/root /root\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\nmkdir -p /etc/docker\n\n# Delete certs.d if it is a symlink (from previous boot).\n[ -L /etc/docker/certs.d ] && rm /etc/docker/certs.d\n\n# Create symlink if certs.d doesn't exist (user may have created a regular directory).\nif [ ! -e /etc/docker/certs.d ]; then\n # We don't know if the host is Linux or macOS, so we take a guess based on which mountpoint exists.\n if [ -d \"/Users/${LIMA_CIDATA_USER}\" ]; then\n ln -s \"/Users/${LIMA_CIDATA_USER}/.docker/certs.d\" /etc/docker\n elif [ -d \"/home/${LIMA_CIDATA_USER}\" ]; then\n ln -s \"/home/${LIMA_CIDATA_USER}/.docker/certs.d\" /etc/docker\n fi\nfi\n"},{"mode":"system","script":"#!/bin/sh\nhostname lima-rancher-desktop\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\n# During boot is the only safe time to delete old k3s versions.\nrm -rf /var/lib/rancher/k3s/data\n# Delete all tmp files older than 3 days.\nfind /tmp -depth -mtime +3 -delete\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\nfor dir in / /etc /tmp /var/lib; do\n mount --make-shared \"${dir}\"\ndone\n"},{"mode":"system","script":"#!/bin/sh\n# Move logrotate to hourly, because busybox crond only handles time jumps up\n# to one hour; this ensures that if the machine is suspended over long\n# periods, things will still happen often enough. 
This is idempotent.\nmv -n /etc/periodic/daily/logrotate /etc/periodic/hourly/\nrc-update add crond default\nrc-service crond start\n"},{"mode":"system","script":"set -o errexit -o nounset -o xtrace\nusermod --append --groups docker \"${LIMA_CIDATA_USER}\"\n"},{"mode":"system","script":"export CAROOT=/run/mkcert\nmkdir -p $CAROOT\ncd $CAROOT\nmkcert -install\nmkcert localhost\nchown -R nobody:nobody $CAROOT\n"},{"mode":"system","script":"set -o errexit -o nounset -o xtrace\n\n# openresty is backgrounding itself (and writes its own pid file)\nsed -i 's/^command_background/#command_background/' /etc/init.d/openresty\n\n# configure proxy only when allowed-images exists\naiListConf=/usr/local/openresty/nginx/conf/allowed-images.conf\n# Remove the reference to an obsolete image conf filename\noldIAListConf=/usr/local/openresty/nginx/conf/image-allow-list.conf\nsetproxy=\"[ -f $aiListConf ] && supervise_daemon_args=\\"-e HTTPS_PROXY=http://127.0.0.1:3128 \$supervise_daemon_args\\"\"\nfor svc in containerd docker; do\n sed -i \"\#-f $aiListConf#d\" /etc/init.d/$svc\n sed -i \"\#-f $oldIAListConf#d\" /etc/init.d/$svc\n sed -i \"/^supervise_daemon_args/a $setproxy\" /etc/init.d/$svc\ndone\n\n# Make sure openresty log directory exists\ninstall -d -m755 /var/log/openresty\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit\n\nmount bpffs -t bpf /sys/fs/bpf\nmount --make-shared /sys/fs/bpf\nmount --make-shared /sys/fs/cgroup\n"}],"containerd":{"system":false,"user":false,"archives":[{"location":"https://github.com/containerd/nerdctl/releases/download/v1.6.2/nerdctl-full-1.6.2-linux-amd64.tar.gz","arch":"x86_64","digest":"sha256:37678f27ad341a7c568c5064f62bcbe90cddec56e65f5d684edf8ca955c3e6a4"},{"location":"https://github.com/containerd/nerdctl/releases/download/v1.6.2/nerdctl-full-1.6.2-linux-arm64.tar.gz","arch":"aarch64","digest":"sha256:ea30ab544c057e3a0457194ecd273ffbce58067de534bdfaffe4edf3a4da6357"}]},"guestInstallPrefix":"/usr/local","portForwards":[{"guestIPMustBeZero":true,"guestIP":"0.0.0.0","guestPortRange":[1,65535],"hostIP":"0.0.0.0","hostPortRange":[1,65535],"proto":"tcp"},{"guestIP":"127.0.0.1","guestPortRange":[1,65535],"guestSocket":"/var/run/docker.sock","hostIP":"127.0.0.1","hostPortRange":[1,65535],"hostSocket":"/Users/fonsito/.rd/docker.sock","proto":"tcp"}],"networks":[{"lima":"rancher-desktop-shared","macAddress":"52:55:55:1a:dd:d4","interface":"rd1"},{"lima":"rancher-desktop-bridged_en0","macAddress":"52:55:55:89:cb:f0","interface":"rd0"}],"hostResolver":{"enabled":true,"ipv6":false,"hosts":{"host.docker.internal":"host.lima.internal","host.rancher-desktop.internal":"host.lima.internal","lima-rancher-desktop":"lima-0"}},"propagateProxyEnv":true,"caCerts":{"removeDefaults":false},"rosetta":{"enabled":false,"binfmt":false},"plain":false},"sshAddress":"127.0.0.1","protected":false,"HostOS":"darwin","HostArch":"aarch64","LimaHome":"/Users/fonsito/Library/Application Support/rancher-desktop/lima","IdentityFile":"/Users/fonsito/Library/Application Support/rancher-desktop/lima/_config/user"}

2024-07-12T12:02:05.464Z: > limactl start --tty=false 0 $ c [Error]: /Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura exited with code 1 at ChildProcess. (/Applications/Rancher Desktop.app/Contents/Resources/app.asar/dist/app/background.js:2:138016) at ChildProcess.emit (node:events:527:28) at ChildProcess._handle.onexit (node:internal/child_process:291:12) { command: [ '/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura', 'start', '--tty=false', '0' ], stdout: '', stderr: 'time="2024-07-12T13:52:05+02:00" level=info msg="Using the existing instance \"0\""\n' + 'time="2024-07-12T13:52:05+02:00" level=info msg="Starting socket_vmnet daemon for \"rancher-desktop-shared\" network"\n' + 'time="2024-07-12T13:52:05+02:00" level=info msg="Starting socket_vmnet daemon for \"rancher-desktop-bridged_en0\" network"\n' + 'time="2024-07-12T13:52:06+02:00" level=info msg="[hostagent] hostagent socket created at /Users/fonsito/Library/Application Support/rancher-desktop/lima/0/ha.sock"\n' + 'time="2024-07-12T13:52:06+02:00" level=info msg="[hostagent] Starting VZ (hint: to watch the boot progress, see \"/Users/fonsito/Library/Application Support/rancher-desktop/lima/0/serial*.log\")"\n' + 'time="2024-07-12T13:52:06+02:00" level=info msg="[hostagent] new connection from to "\n' + 'time="2024-07-12T13:52:07+02:00" level=info msg="SSH Local Port: 53709"\n' + 'time="2024-07-12T13:52:07+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:52:07+02:00" level=info msg="[hostagent] [VZ] - vm state change: running"\n' + 'time="2024-07-12T13:52:17+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:52:20+02:00" level=info msg="[hostagent] 2024/07/12 13:52:20 tcpproxy: for incoming conn 127.0.0.1:63284, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:52:27+02:00" level=error msg="[hostagent] dhcp: unhandled message type: RELEASE"\n' + 'time="2024-07-12T13:52:30+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:52:31+02:00" level=info msg="[hostagent] 2024/07/12 13:52:31 tcpproxy: for incoming conn 127.0.0.1:63338, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: connection was refused"\n' + 'time="2024-07-12T13:52:41+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:52:51+02:00" level=info msg="[hostagent] 2024/07/12 13:52:51 tcpproxy: for incoming conn 127.0.0.1:63387, error dialing \"192.168.5.15:22\": context deadline exceeded"\n' + 'time="2024-07-12T13:53:01+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:53:11+02:00" level=info msg="[hostagent] 2024/07/12 13:53:11 tcpproxy: for incoming conn 127.0.0.1:63475, error dialing \"192.168.5.15:22\": context deadline exceeded"\n' + 'time="2024-07-12T13:53:21+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:53:24+02:00" level=info msg="[hostagent] 2024/07/12 13:53:24 tcpproxy: for incoming conn 127.0.0.1:63570, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:53:34+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 
'time="2024-07-12T13:53:37+02:00" level=info msg="[hostagent] 2024/07/12 13:53:37 tcpproxy: for incoming conn 127.0.0.1:63642, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:53:47+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:53:50+02:00" level=info msg="[hostagent] 2024/07/12 13:53:50 tcpproxy: for incoming conn 127.0.0.1:63704, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:54:00+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:54:03+02:00" level=info msg="[hostagent] 2024/07/12 13:54:03 tcpproxy: for incoming conn 127.0.0.1:63755, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:54:13+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:54:16+02:00" level=info msg="[hostagent] 2024/07/12 13:54:16 tcpproxy: for incoming conn 127.0.0.1:63809, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:54:26+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:54:29+02:00" level=info msg="[hostagent] 2024/07/12 13:54:29 tcpproxy: for incoming conn 127.0.0.1:63873, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:54:29+02:00" level=info msg="[hostagent] 2024/07/12 13:54:29 tcpproxy: for incoming conn 127.0.0.1:63866, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:54:39+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:54:42+02:00" level=info msg="[hostagent] 2024/07/12 13:54:42 tcpproxy: for incoming conn 127.0.0.1:63922, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:54:52+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:54:55+02:00" level=info msg="[hostagent] 2024/07/12 13:54:55 tcpproxy: for incoming conn 127.0.0.1:63986, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:55:05+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:55:08+02:00" level=info msg="[hostagent] 2024/07/12 13:55:08 tcpproxy: for incoming conn 127.0.0.1:64046, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:55:18+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:55:21+02:00" level=info msg="[hostagent] 2024/07/12 13:55:21 tcpproxy: for incoming conn 127.0.0.1:64109, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:55:31+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:55:34+02:00" level=info msg="[hostagent] 2024/07/12 13:55:34 tcpproxy: for incoming conn 127.0.0.1:64182, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:55:45+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 
of 3: \"ssh\""\n' + 'time="2024-07-12T13:55:48+02:00" level=info msg="[hostagent] 2024/07/12 13:55:48 tcpproxy: for incoming conn 127.0.0.1:64232, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:55:58+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:56:01+02:00" level=info msg="[hostagent] 2024/07/12 13:56:01 tcpproxy: for incoming conn 127.0.0.1:64293, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:56:11+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:56:14+02:00" level=info msg="[hostagent] 2024/07/12 13:56:14 tcpproxy: for incoming conn 127.0.0.1:64361, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:56:24+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:56:27+02:00" level=info msg="[hostagent] 2024/07/12 13:56:27 tcpproxy: for incoming conn 127.0.0.1:64423, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:56:37+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:56:40+02:00" level=info msg="[hostagent] 2024/07/12 13:56:40 tcpproxy: for incoming conn 127.0.0.1:64481, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:56:50+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:56:53+02:00" level=info msg="[hostagent] 2024/07/12 13:56:53 tcpproxy: for incoming conn 127.0.0.1:64533, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:57:03+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:57:06+02:00" level=info msg="[hostagent] 2024/07/12 13:57:06 tcpproxy: for incoming conn 127.0.0.1:64591, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:57:16+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:57:19+02:00" level=info msg="[hostagent] 2024/07/12 13:57:19 tcpproxy: for incoming conn 127.0.0.1:64682, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:57:29+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:57:32+02:00" level=info msg="[hostagent] 2024/07/12 13:57:32 tcpproxy: for incoming conn 127.0.0.1:64735, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:57:42+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:57:45+02:00" level=info msg="[hostagent] 2024/07/12 13:57:45 tcpproxy: for incoming conn 127.0.0.1:64800, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:57:55+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:57:58+02:00" level=info msg="[hostagent] 2024/07/12 13:57:58 tcpproxy: for incoming conn 127.0.0.1:64861, error dialing \"192.168.5.15:22\": connect tcp 
192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:58:08+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:58:11+02:00" level=info msg="[hostagent] 2024/07/12 13:58:11 tcpproxy: for incoming conn 127.0.0.1:64913, error'... 5805 more characters, code: 1, [Symbol(child-process.command)]: '/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura start --tty=false 0' } 2024-07-12T12:02:05.489Z: Error starting lima: c [Error]: /Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura exited with code 1 at ChildProcess. (/Applications/Rancher Desktop.app/Contents/Resources/app.asar/dist/app/background.js:2:138016) at ChildProcess.emit (node:events:527:28) at ChildProcess._handle.onexit (node:internal/child_process:291:12) { command: [ '/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura', 'start', '--tty=false', '0' ], stdout: '', stderr: 'time="2024-07-12T13:52:05+02:00" level=info msg="Using the existing instance \"0\""\n' + 'time="2024-07-12T13:52:05+02:00" level=info msg="Starting socket_vmnet daemon for \"rancher-desktop-shared\" network"\n' + 'time="2024-07-12T13:52:05+02:00" level=info msg="Starting socket_vmnet daemon for \"rancher-desktop-bridged_en0\" network"\n' + 'time="2024-07-12T13:52:06+02:00" level=info msg="[hostagent] hostagent socket created at /Users/fonsito/Library/Application Support/rancher-desktop/lima/0/ha.sock"\n' + 'time="2024-07-12T13:52:06+02:00" level=info msg="[hostagent] Starting VZ (hint: to watch the boot progress, see \"/Users/fonsito/Library/Application Support/rancher-desktop/lima/0/serial*.log\")"\n' + 'time="2024-07-12T13:52:06+02:00" level=info msg="[hostagent] new connection from to "\n' + 'time="2024-07-12T13:52:07+02:00" level=info msg="SSH Local Port: 53709"\n' + 'time="2024-07-12T13:52:07+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:52:07+02:00" level=info msg="[hostagent] [VZ] - vm state change: running"\n' + 'time="2024-07-12T13:52:17+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:52:20+02:00" level=info msg="[hostagent] 2024/07/12 13:52:20 tcpproxy: for incoming conn 127.0.0.1:63284, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:52:27+02:00" level=error msg="[hostagent] dhcp: unhandled message type: RELEASE"\n' + 'time="2024-07-12T13:52:30+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:52:31+02:00" level=info msg="[hostagent] 2024/07/12 13:52:31 tcpproxy: for incoming conn 127.0.0.1:63338, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: connection was refused"\n' + 'time="2024-07-12T13:52:41+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:52:51+02:00" level=info msg="[hostagent] 2024/07/12 13:52:51 tcpproxy: for incoming conn 127.0.0.1:63387, error dialing \"192.168.5.15:22\": context deadline exceeded"\n' + 'time="2024-07-12T13:53:01+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:53:11+02:00" level=info msg="[hostagent] 2024/07/12 13:53:11 tcpproxy: for incoming conn 127.0.0.1:63475, error dialing \"192.168.5.15:22\": context deadline 
exceeded"\n' + 'time="2024-07-12T13:53:21+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:53:24+02:00" level=info msg="[hostagent] 2024/07/12 13:53:24 tcpproxy: for incoming conn 127.0.0.1:63570, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:53:34+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:53:37+02:00" level=info msg="[hostagent] 2024/07/12 13:53:37 tcpproxy: for incoming conn 127.0.0.1:63642, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:53:47+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:53:50+02:00" level=info msg="[hostagent] 2024/07/12 13:53:50 tcpproxy: for incoming conn 127.0.0.1:63704, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:54:00+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:54:03+02:00" level=info msg="[hostagent] 2024/07/12 13:54:03 tcpproxy: for incoming conn 127.0.0.1:63755, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:54:13+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:54:16+02:00" level=info msg="[hostagent] 2024/07/12 13:54:16 tcpproxy: for incoming conn 127.0.0.1:63809, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:54:26+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:54:29+02:00" level=info msg="[hostagent] 2024/07/12 13:54:29 tcpproxy: for incoming conn 127.0.0.1:63873, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:54:29+02:00" level=info msg="[hostagent] 2024/07/12 13:54:29 tcpproxy: for incoming conn 127.0.0.1:63866, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:54:39+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:54:42+02:00" level=info msg="[hostagent] 2024/07/12 13:54:42 tcpproxy: for incoming conn 127.0.0.1:63922, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:54:52+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:54:55+02:00" level=info msg="[hostagent] 2024/07/12 13:54:55 tcpproxy: for incoming conn 127.0.0.1:63986, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:55:05+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:55:08+02:00" level=info msg="[hostagent] 2024/07/12 13:55:08 tcpproxy: for incoming conn 127.0.0.1:64046, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:55:18+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:55:21+02:00" level=info msg="[hostagent] 2024/07/12 13:55:21 tcpproxy: for incoming conn 127.0.0.1:64109, error dialing \"192.168.5.15:22\": connect tcp 
192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:55:31+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:55:34+02:00" level=info msg="[hostagent] 2024/07/12 13:55:34 tcpproxy: for incoming conn 127.0.0.1:64182, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:55:45+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:55:48+02:00" level=info msg="[hostagent] 2024/07/12 13:55:48 tcpproxy: for incoming conn 127.0.0.1:64232, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:55:58+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:56:01+02:00" level=info msg="[hostagent] 2024/07/12 13:56:01 tcpproxy: for incoming conn 127.0.0.1:64293, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:56:11+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:56:14+02:00" level=info msg="[hostagent] 2024/07/12 13:56:14 tcpproxy: for incoming conn 127.0.0.1:64361, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:56:24+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:56:27+02:00" level=info msg="[hostagent] 2024/07/12 13:56:27 tcpproxy: for incoming conn 127.0.0.1:64423, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:56:37+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:56:40+02:00" level=info msg="[hostagent] 2024/07/12 13:56:40 tcpproxy: for incoming conn 127.0.0.1:64481, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:56:50+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:56:53+02:00" level=info msg="[hostagent] 2024/07/12 13:56:53 tcpproxy: for incoming conn 127.0.0.1:64533, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:57:03+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:57:06+02:00" level=info msg="[hostagent] 2024/07/12 13:57:06 tcpproxy: for incoming conn 127.0.0.1:64591, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:57:16+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:57:19+02:00" level=info msg="[hostagent] 2024/07/12 13:57:19 tcpproxy: for incoming conn 127.0.0.1:64682, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:57:29+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:57:32+02:00" level=info msg="[hostagent] 2024/07/12 13:57:32 tcpproxy: for incoming conn 127.0.0.1:64735, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:57:42+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 
'time="2024-07-12T13:57:45+02:00" level=info msg="[hostagent] 2024/07/12 13:57:45 tcpproxy: for incoming conn 127.0.0.1:64800, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:57:55+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:57:58+02:00" level=info msg="[hostagent] 2024/07/12 13:57:58 tcpproxy: for incoming conn 127.0.0.1:64861, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' + 'time="2024-07-12T13:58:08+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' + 'time="2024-07-12T13:58:11+02:00" level=info msg="[hostagent] 2024/07/12 13:58:11 tcpproxy: for incoming conn 127.0.0.1:64913, error'... 5805 more characters, code: 1, [Symbol(child-process.command)]: '/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura start --tty=false 0' } `