lima-vm / lima

Linux virtual machines, with a focus on running containers
https://lima-vm.io/
Apache License 2.0

Lima VM will randomly lose network connectivity #2480

Closed: wduncanfraser closed this issue 1 month ago

wduncanfraser commented 4 months ago

Description

Lima VM will randomly lose host connectivity; this usually seems to correlate with minor memory pressure/contention on the host system.

Logs will show:

{"error":"close unix -\u003e/tmp/lima-psl-127.0.0.1-80-3927272561/sock: shutdown: socket is not connected","level":"debug","msg":"failed to call CloseRead","time":"2024-07-10T19:56:22-05:00"}
{"error":"close tcp4 127.0.0.1:80-\u003e127.0.0.1:58754: shutdown: socket is not connected","level":"debug","msg":"failed to call CloseRead","time":"2024-07-10T19:56:37-05:00"}
{"level":"error","msg":"write unixgram -\u003e: write: no buffer space available","time":"2024-07-10T19:57:49-05:00"}
{"level":"error","msg":"cannot receive packets from , disconnecting: cannot read size from socket: read unixgram -\u003e: use of closed network connection","time":"2024-07-10T19:57:49-05:00"}
{"level":"error","msg":"FD connection closed with errorcannot read size from socket: read unixgram -\u003e: use of closed network connection","time":"2024-07-10T19:57:49-05:00"}

It appears that the lima agent is crashing or disconnecting without recovering. When this happens, I can no longer shell into the Lima VM. We're seeing this across many machines, on both Intel and ARM hosts.
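
For reference, a minimal sketch of how to pull the host-side logs when an instance becomes unreachable (assuming Lima's default data directory under ~/.lima; the instance name "ngdev" is taken from the probe hint below):

#!/usr/bin/env bash
# Inspect the Lima host agent logs for the affected instance.
# INSTANCE is a placeholder; replace it with the actual instance name.
INSTANCE=ngdev
tail -n 100 ~/.lima/"$INSTANCE"/ha.stderr.log
tail -n 100 ~/.lima/"$INSTANCE"/ha.stdout.log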

Lima template:

images:
# Try to use release-yyyyMMdd image if available. Note that release-yyyyMMdd will be removed after several months.
- location: "https://cloud-images.ubuntu.com/releases/24.04/release-20240702/ubuntu-24.04-server-cloudimg-amd64.img"
  arch: "x86_64"
  digest: "sha256:182dc760bfca26c45fb4e4668049ecd4d0ecdd6171b3bae81d0135e8f1e9d93e"
- location: "https://cloud-images.ubuntu.com/releases/24.04/release-20240702/ubuntu-24.04-server-cloudimg-arm64.img"
  arch: "aarch64"
  digest: "sha256:5fe06e10a3b53cfff06edcb8595552b1f0372265b69fa424aa464eb4bcba3b09"
# Fallback to the latest release image.
# Hint: run `limactl prune` to invalidate the cache
- location: "https://cloud-images.ubuntu.com/releases/24.04/release/ubuntu-24.04-server-cloudimg-amd64.img"
  arch: "x86_64"
- location: "https://cloud-images.ubuntu.com/releases/24.04/release/ubuntu-24.04-server-cloudimg-arm64.img"
  arch: "aarch64"

vmType: vz
rosetta:
  enabled: true
  binfmt: true

networks:
- vzNAT: true

# Mounts are disabled in this template, but can be enabled optionally.
mounts: []
mountType: virtiofs

# containerd is managed by k3s, not by Lima, so the values are set to false here.
containerd:
  system: false
  user: false

env:
  INSTALL_K3S_VERSION: v1.29.6+k3s1
  EXTRA_K3S_FLAGS:

provision:
- mode: system
  script: |
    #!/usr/bin/env bash

    if [ ! -d /var/lib/rancher/k3s ]; then
        mkdir -p /etc/rancher/k3s
        tee /etc/rancher/k3s/registries.yaml <<EOF
    mirrors:
      host.lima.internal:
        endpoint:
        - "http://host.lima.internal:5000"
    EOF

        curl -sfL https://get.k3s.io -o install-k3s.sh

        bash ./install-k3s.sh \
          --write-kubeconfig-mode 644 \
          --disable=traefik \
          ${EXTRA_K3S_FLAGS}
    fi
probes:
- script: |
    #!/bin/bash
    set -eux -o pipefail
    if ! timeout 30s bash -c "until test -f /etc/rancher/k3s/k3s.yaml; do sleep 3; done"; then
        echo >&2 "k3s is not running yet"
        exit 1
    fi
  hint: |
    The k3s kubeconfig file has not yet been created.
    Run "limactl shell ngdev sudo journalctl -u k3s" to check the log.
    If that is still empty, check the bottom of the log at "/var/log/cloud-init-output.log".
balajiv113 commented 4 months ago

I think it's related to

https://github.com/containers/gvisor-tap-vsock/issues/367

wduncanfraser commented 4 months ago

I think it's related to

containers/gvisor-tap-vsock#367

I think that makes sense. The most common occurrences correspond to heavier network traffic, e.g. when the cluster is pulling images during builds or spinning up our application stacks on the k3s cluster.
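
A rough sketch for generating that kind of load on demand (assuming the "ngdev" instance name from the probe hint above; any sustained download inside the VM should do):

# Pull a large file inside the VM to generate sustained network traffic.
limactl shell ngdev curl -fL -o /dev/null https://cloud-images.ubuntu.com/releases/24.04/release/ubuntu-24.04-server-cloudimg-amd64.img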

Luca232000 commented 3 months ago

I have the same problem on multiple machines. Is there a workaround for this?

Ranjandas commented 3 months ago

I am observing a similar network issue when using socket_vmnet, where VM-to-VM communication completely stops. However, I am able to shell into the VMs using limactl shell. @balajiv113 Do you think it is related to the same gvisor issue, or should I file a new issue for what I am seeing?

balajiv113 commented 3 months ago

@Ranjandas - No, it is not related to gvisor; socket_vmnet is a different stack.

Please raise an issue with reproduction steps.

balajiv113 commented 1 month ago

Marking this issue as closed, as it is already addressed in gvisor-tap-vsock and Lima is now using the latest version.
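
For anyone still hitting this, a sketch of how to pick up the fix (the instance name "ngdev" and the template file name k3s.yaml are placeholders): upgrade Lima to the latest release, then recreate the instance so it runs the bundled gvisor-tap-vsock. Note that deleting the instance discards its disk, so any k3s state is lost.

# Verify the installed Lima version, then recreate the instance so it uses
# the gvisor-tap-vsock version bundled with the current release.
limactl --version
limactl stop ngdev
limactl delete ngdev
limactl start --name=ngdev k3s.yaml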