canonical / lxd

macvlan NICs losing connectivity when LXD is reloaded #11089

Closed: markrattray closed this issue 1 year ago

markrattray commented 1 year ago

Required information

Issue description

All instances are using macvlan interfaces for now, until I get everything migrated over to LXD.

All VMs are dropping off the network. When trying to restart them, they just go into a stopped state. When trying to start them again, we get this error message:

~$ lxc start vmname
Error: Failed to start device "eth0": Failed to set the MAC address: Failed to run: ip link set dev macd8b62eeb address 00:16:3e:87:19:1f: exit status 2 (RTNETLINK answers: Address already in use))

I notice that the device name changes every time I try to start the VM…

Failed to run: ip link set dev macd8b62eeb address 00:16:3e:87:19:1f
Failed to run: ip link set dev macef515ed2 address 00:16:3e:87:19:1f
Failed to run: ip link set dev mac99318f7d address 00:16:3e:87:19:1f
...

I can manually delete the device and start the VM:

ip link show | grep -B 1 '00:16:3e:87:19:1f'
    29: maca35b59f9@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:87:19:1f brd ff:ff:ff:ff:ff:ff

sudo ip link delete maca35b59f9
lxc start vmname
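
A sketch of that recovery as a small script, for anyone hitting the same thing (assumes the snap install, a single eth0 NIC, and that volatile.eth0.hwaddr holds the conflicting MAC):

#!/bin/sh
# Sketch: find the leftover host-side macvlan interface that still holds the
# instance's MAC address, delete it, then start the instance again.
vm="$1"
mac="$(lxc config get "$vm" volatile.eth0.hwaddr)"
dev="$(ip -o link | awk -v m="$mac" '$0 ~ m { sub(/[@:].*/, "", $2); print $2; exit }')"
[ -n "$dev" ] && sudo ip link delete "$dev"
lxc start "$vm"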

Interestingly, it doesn't matter whether this is a Windows image I built or one of the Ubuntu Server and Desktop VM images downloaded from the default image remote, images.linuxcontainers.org.

Steps to reproduce

  1. Create a VM (images:ubuntu/22.04/cloud) with a macvlan interface (see the sketch after this list)
  2. Wait about a week for VMs to start losing network connectivity
  3. Try to restart them
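
For reference, step 1 can be done like this (a minimal sketch; the instance name macvlantest is made up, the parent eno1 matches the configs below):

lxc init images:ubuntu/22.04/cloud macvlantest --vm
lxc config device add macvlantest eth0 nic nictype=macvlan parent=eno1
lxc start macvlantest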

Information to attach

last reboot due to this issue

$ last | grep reboot
reboot   system boot  5.15.0-52-generi Thu Oct 27 17:18

dmesg

root@server1:/home/someadmin# dmesg | grep windowsvm1
[  121.322993] audit: type=1400 audit(1666905560.115:61): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-windowsvm1_</var/snap/lxd/common/lxd>" pid=16353 comm="apparmor_parser"
[  121.341129] audit: type=1400 audit(1666905560.131:62): apparmor="DENIED" operation="open" profile="lxd-windowsvm1_</var/snap/lxd/common/lxd>" name="/var/lib/snapd/hostfs/run/systemd/resolve/stub-resolv.conf" pid=16413 comm="lxd" requested_mask="r" denied_mask="r" fsuid=0 ouid=101
[588092.580580] audit: type=1400 audit(1667493544.754:269): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxd-windowsvm1_</var/snap/lxd/common/lxd>" pid=2179130 comm="apparmor_parser"
[588092.600952] audit: type=1400 audit(1667493544.774:270): apparmor="DENIED" operation="open" profile="lxd-windowsvm1_</var/snap/lxd/common/lxd>" name="/var/lib/snapd/hostfs/run/systemd/resolve/stub-resolv.conf" pid=2179134 comm="lxd" requested_mask="r" denied_mask="r" fsuid=0 ouid=101

root@server1:/home/someadmin# dmesg | grep maca35b59f9

Instance log (before "ip link delete {device}"): the qemu.log disappears even though the instance is running, and only reappears when the instance is started successfully.
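
A quick way to spot running VMs in this state is to check for the missing qemu.log (a sketch, assuming the snap path and the default project; for other projects the log directory name carries a project prefix):

for i in $(lxc list --format csv -c n,s,t | awk -F, '$2 == "RUNNING" && $3 == "VIRTUAL-MACHINE" {print $1}'); do
    [ -f "/var/snap/lxd/common/lxd/logs/${i}/qemu.log" ] || echo "no qemu.log: ${i}"
done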

root@server1:~$ lxc info --show-log ubuntudesktopvm2
Name: ubuntudesktopvm2
Status: RUNNING
Type: virtual-machine
Architecture: x86_64
Location: server1.domain.tld
PID: 22501
Created: 2022/08/24 04:29 EDT
Last Used: 2022/10/27 17:19 EDT

Resources:
  Processes: 132
  Disk usage:
    root: 7.12GiB
  CPU usage:
    CPU usage (in seconds): 35027
  Memory usage:
    Memory (current): 3.70GiB
    Memory (peak): 3.84GiB
  Network usage:
    enp5s0:
      Type: broadcast
      State: UP
      Host interface: mac08fe77dc
      MAC address: 00:16:3e:0c:57:c6
      MTU: 1500
      Bytes received: 529.02MB
      Bytes sent: 305.63MB
      Packets received: 3759880
      Packets sent: 1370752
      IP addresses:
        inet6: fe80::216:3eff:fe0c:57c6/64 (link)
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 6.63GB
      Bytes sent: 6.63GB
      Packets received: 49331802
      Packets sent: 49331802
      IP addresses:
        inet:  127.0.0.1/8 (local)
        inet6: ::1/128 (local)
Error: open /var/snap/lxd/common/lxd/logs/ubuntudesktopvm2/qemu.log: no such file or directory

instance config #1: windowsvm1

# lxc config show windowsvm1 --expanded
architecture: x86_64
config:
  cloud-init.user-data: |
    #cloud-config
    first_logon_behaviour: false
    set_timezone: America/New_York
    users:
      - name: someadmin
        passwd: someadminpassword
        primary_group: Administrators
    winrm_enable_basic_auth: true
    winrm_configure_https_listener: true
  image.architecture: amd64
  image.description: MS WS 2022 S,D,c (20220817-2048z)
  image.os: Windows
  image.release: "2022"
  image.serial: 20220817-2048z
  image.type: virtual-machine
  image.variant: Standard, Desktop Experience, cloudbase-init
  limits.cpu: "4"
  limits.memory: 6GiB
  security.syscalls.intercept.sysinfo: "true"
  volatile.base_image: c870435e5901d2f20f3bd0418a4945fbae4f38f4327eec8bf9b5343cfa4574f6
  volatile.cloud-init.instance-id: c821ad49-9fd6-4d9a-b14d-2a1746ce46c9
  volatile.eth0.host_name: mac0a7e5062
  volatile.eth0.hwaddr: 00:16:3e:87:19:1f
  volatile.eth0.last_state.created: "false"
  volatile.last_state.power: RUNNING
  volatile.uuid: 2c78483b-2a86-4cd5-9e4c-69f47e602f28
  volatile.vsock_id: "33"
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eno1
    type: nic
  root:
    path: /
    pool: sp00
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

instance config #2: ubuntudesktopvm1

lxc config show ubuntudesktopvm1 --expanded
architecture: x86_64
config:
  cloud-init.user-data: |
    #cloud-config
    packages:
      - apt-transport-https
      - gpg
    package_upgrade: true
    timezone: America/New_York
  image.architecture: amd64
  image.description: Ubuntu jammy amd64 (20220821_07:42)
  image.os: Ubuntu
  image.release: jammy
  image.serial: "20220821_07:42"
  image.type: disk-kvm.img
  image.variant: desktop
  limits.cpu: "6"
  limits.memory: 6GiB
  security.syscalls.intercept.sysinfo: "true"
  volatile.base_image: d7c196be900f47cbcc6167031bc1521ec31a11e6b117ebebbc6234f41fe57edf
  volatile.cloud-init.instance-id: add76d39-6ffa-43b9-8331-67b172686ff7
  volatile.eth0.host_name: mac22d6f498
  volatile.eth0.hwaddr: 00:16:3e:f9:d2:d5
  volatile.eth0.last_state.created: "false"
  volatile.last_state.power: RUNNING
  volatile.uuid: 092a4884-128c-4b05-b4b5-876d322f9df9
  volatile.vsock_id: "37"
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eno1
    type: nic
  root:
    path: /
    pool: sp00
    size: 30GiB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Main daemon log: cat /var/snap/lxd/common/lxd/logs/lxd.log

cat /var/snap/lxd/common/lxd/logs/lxd.log
time="2022-10-31T15:34:15-04:00" level=warning msg=" - Couldn't find the CGroup network priority controller, network priority will be ignored"
time="2022-10-31T15:34:23-04:00" level=warning msg="Failed to delete operation" class=task description="Pruning leftover image files" err="Operation not found" operation=0579cc32-cea0-44a1-9bfb-614c4b0f7d11 project= status=Success
time="2022-10-31T15:34:24-04:00" level=warning msg="Failed to delete operation" class=task description="Remove orphaned operations" err="Operation not found" operation=b36749d9-3590-45e2-9370-01beb5d5560b project= status=Success
time="2022-10-31T15:34:24-04:00" level=warning msg="Failed to delete operation" class=task description="Cleaning up expired images" err="Operation not found" operation=2073d73d-a990-461c-b36c-025debcfb13d project= status=Success
time="2022-11-03T12:18:18-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/windowsvm1/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"

Output of the client with --debug: n/a

Main daemon log: cat /var/snap/lxd/common/lxd/logs/lxd.log.1

cat /var/snap/lxd/common/lxd/logs/lxd.log.1
time="2022-10-27T17:18:45-04:00" level=warning msg=" - Couldn't find the CGroup network priority controller, network priority will be ignored"
time="2022-10-27T17:18:55-04:00" level=warning msg="Failed to delete operation" class=task description="Pruning leftover image files" err="Operation not found" operation=fcbca487-bcf3-47ad-94b1-4823f8082a10 project= status=Success
time="2022-10-27T17:18:55-04:00" level=warning msg="Failed to delete operation" class=task description="Cleaning up expired images" err="Operation not found" operation=493db1a7-1e7c-4ec5-840d-f05e24bf0aec project= status=Success
time="2022-10-27T17:18:55-04:00" level=warning msg="Failed to delete operation" class=task description="Remove orphaned operations" err="Operation not found" operation=9294261c-4769-429d-815d-f0b2c7e0a963 project= status=Success
time="2022-10-27T17:18:56-04:00" level=warning msg="Failed to delete operation" class=task description="Synchronizing images" err="Operation not found" operation=c9944a20-8fa9-4365-85d1-e80a6cb29943 project= status=Success
time="2022-10-27T17:19:21-04:00" level=warning msg="Starting VM without default firmware (-bios or -kernel in raw.qemu)" instance=citrixnetscaler1 instanceType=virtual-machine project=default
time="2022-10-27T17:19:23-04:00" level=warning msg="Starting VM without default firmware (-bios or -kernel in raw.qemu)" instance=citrixnetscaler2 instanceType=virtual-machine project=default
time="2022-10-27T17:20:18-04:00" level=error msg="Failed to advertise vsock address" err="Failed connecting to lxd-agent: Get \"https://custom.socket/1.0\": dial vsock vm(37):8443: connect: connection timed out" instance=ubuntudesktopvm1 instanceType=virtual-machine project=default
time="2022-10-27T17:20:19-04:00" level=warning msg="Could not get VM state from agent" err="Failed connecting to agent: Get \"https://custom.socket/1.0\": dial vsock vm(37):8443: connect: connection timed out" instance=ubuntudesktopvm1 instanceType=virtual-machine project=default
time="2022-10-27T17:20:19-04:00" level=error msg="Failed writing error for HTTP response" err="write unix /var/snap/lxd/common/lxd/unix.socket->@: write: broken pipe" url=/1.0/instances writeErr="write unix /var/snap/lxd/common/lxd/unix.socket->@: write: broken pipe"
time="2022-10-27T17:20:34-04:00" level=error msg="Failed to advertise vsock address" err="Failed connecting to lxd-agent: Get \"https://custom.socket/1.0\": dial vsock vm(39):8443: connect: connection timed out" instance=ubuntudesktopvm2 instanceType=virtual-machine project=default
time="2022-10-31T15:33:53-04:00" level=warning msg="Could not handover member's responsibilities" err="Failed to transfer leadership: No online voter found"

lxc monitor

[lxd_11089_lxcmonitor.txt](https://github.com/lxc/lxd/files/9933256/lxd_11089_lxcmonitor.txt)
tomponline commented 1 year ago

So it seems like your LXD cluster database is not in a good state, or that particular member is not in a good state.

Could not handover member's responsibilities

Please show lxc cluster list output.

tomponline commented 1 year ago

Please can you also show the output of cat /var/snap/lxd/common/lxd/logs/lxd.log after stopping the instance.

markrattray commented 1 year ago

So it seems like your LXD cluster database is not in a good state, or that particular member is not in a good state.

Could not handover member's responsibilities

Please show lxc cluster list output.

Hi Tom.

This might explain that error, and apologies that I didn't mention it earlier... of the 4 physical servers at that site, only that one physical server is in the LXD cluster. The other two physical servers destined for the LXD cluster are still running vSphere with existing/legacy services on them. My plan was to migrate enough services over to new instances on LXD to free up those two servers and bring them into the cluster ASAP.

The 4th server is a backup server with additional storage and already running LXD and some instances, but it will not be part of the LXD cluster.

A little bit of good news: whilst getting the logs for you, I chose an Ubuntu Desktop 20.04 x86_64 VM instance built from the images repo. All VMs are having this issue, however this one actually started fine and recovered without me having to do ip link delete {dev}. It's now up with an IP which is displayed in lxc list. Something special about an Ubuntu instance in this scenario (of course).

Not so with Windows VMs.

Daemon log: I captured this before shutting down a Windows VM and again after starting it, but they have the same content...

pre shutdown

time="2022-10-31T15:34:15-04:00" level=warning msg=" - Couldn't find the CGroup network priority controller, network priority will be ignored"
time="2022-10-31T15:34:23-04:00" level=warning msg="Failed to delete operation" class=task description="Pruning leftover image files" err="Operation not found" operation=0579cc32-cea0-44a1-9bfb-614c4b0f7d11 project= status=Success
time="2022-10-31T15:34:24-04:00" level=warning msg="Failed to delete operation" class=task description="Remove orphaned operations" err="Operation not found" operation=b36749d9-3590-45e2-9370-01beb5d5560b project= status=Success
time="2022-10-31T15:34:24-04:00" level=warning msg="Failed to delete operation" class=task description="Cleaning up expired images" err="Operation not found" operation=2073d73d-a990-461c-b36c-025debcfb13d project= status=Success
time="2022-11-03T12:18:18-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/windowsvm07/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"
time="2022-11-03T17:02:34-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/windowsvm06/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"
time="2022-11-03T17:11:02-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/ubuntudesktopvm2/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"

post start

time="2022-10-31T15:34:15-04:00" level=warning msg=" - Couldn't find the CGroup network priority controller, network priority will be ignored"
time="2022-10-31T15:34:23-04:00" level=warning msg="Failed to delete operation" class=task description="Pruning leftover image files" err="Operation not found" operation=0579cc32-cea0-44a1-9bfb-614c4b0f7d11 project= status=Success
time="2022-10-31T15:34:24-04:00" level=warning msg="Failed to delete operation" class=task description="Remove orphaned operations" err="Operation not found" operation=b36749d9-3590-45e2-9370-01beb5d5560b project= status=Success
time="2022-10-31T15:34:24-04:00" level=warning msg="Failed to delete operation" class=task description="Cleaning up expired images" err="Operation not found" operation=2073d73d-a990-461c-b36c-025debcfb13d project= status=Success
time="2022-11-03T12:18:18-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/windowsvm07/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"
time="2022-11-03T17:02:34-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/windowsvm06/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"
time="2022-11-03T17:11:02-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/ubuntudesktopvm2/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"

There is no qemu.log for an instance in this state even though it was still running. I thought it was because the log was deleted when I rebooted, but I've checked against other instances in this state and they are the same... until they restart successfully, there is no qemu.log.

ls /var/snap/lxd/common/lxd/logs/windowsvm08/
qemu.conf  qemu.console  qemu.monitor  qemu.pid  qemu.spice  virtio-fs.config.sock  virtiofsd.pid
markrattray commented 1 year ago

Good morning.

For another Windows VM with this issue, I took a copy of the daemon log before shutting it down and again afterwards; inspecting them, the daemon log actually hasn't changed at all since my previous post.

Trying to start it gives the usual error:

root@server1:/home/someadmin# lxc start windowsvm09
    Error: Failed to start device "eth0": Failed to set the MAC address: Failed to run: ip link set dev mac28c3ec1e address 00:16:3e:f3:dd:b2: exit status 2 (RTNETLINK answers: Address already in use)
    Try `lxc info --show-log windowsvm09` for more info
root@server1:/home/someadmin# lxc info --show-log windowsvm09
    Name: windowsvm09
    Status: STOPPED
    Type: virtual-machine
    Architecture: x86_64
    Location: server1.domain.tld
    Created: 2022/08/17 16:54 EDT
    Last Used: 2022/10/27 17:19 EDT
    Error: open /var/snap/lxd/common/lxd/logs/windowsvm09/qemu.log: no such file or directory
root@server1:/home/someadmin# ip link show | grep -B 1 '00:16:3e:f3:dd:b2'
    34: macd6a193b1@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:f3:dd:b2 brd ff:ff:ff:ff:ff:ff
root@server1:/home/someadmin# ip link delete macd6a193b1
root@server1:/home/someadmin# lxc start windowsvm09
(back to normal for that VM instance for a few days)
...

Then for a VM using the Ubuntu Server 22.04 x86_64 /cloud image from the default images remote, used for Docker Swarm...

root@server1:/home/someadmin# lxc exec ubuntuservervm01 bash

root@ubuntuservervm01:~# lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 22.04.1 LTS
    Release:        22.04
    Codename:       jammy

root@ubuntuservervm01:~# ping 192.168.0.10
    ping: connect: Network is unreachable

root@ubuntuservervm01:~# ip link
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether 00:16:3e:cc:ea:86 brd ff:ff:ff:ff:ff:ff
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
        link/ether 02:42:e4:b7:ef:ff brd ff:ff:ff:ff:ff:ff
    4: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
        link/ether 02:42:d0:ab:92:e4 brd ff:ff:ff:ff:ff:ff
    10: vethcdfcb02@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP mode DEFAULT group default
        link/ether a6:76:d2:7e:99:60 brd ff:ff:ff:ff:ff:ff link-netnsid 1

root@server1:/home/someadmin# ls /var/snap/lxd/common/lxd/logs/pd_ubuntuservervm01
    qemu.conf  qemu.console  qemu.monitor  qemu.pid  qemu.spice  virtio-fs.config.sock  virtiofsd.pid

root@server1:/home/someadmin# lxc stop ubuntuservervm01

root@server1:/home/someadmin# cat /var/snap/lxd/common/lxd/logs/lxd.log
    time="2022-10-31T15:34:15-04:00" level=warning msg=" - Couldn't find the CGroup network priority controller, network priority will be ignored"
    time="2022-10-31T15:34:23-04:00" level=warning msg="Failed to delete operation" class=task description="Pruning leftover image files" err="Operation not found" operation=0579cc32-cea0-44a1-9bfb-614c4b0f7d11 project= status=Success
    time="2022-10-31T15:34:24-04:00" level=warning msg="Failed to delete operation" class=task description="Remove orphaned operations" err="Operation not found" operation=b36749d9-3590-45e2-9370-01beb5d5560b project= status=Success
    time="2022-10-31T15:34:24-04:00" level=warning msg="Failed to delete operation" class=task description="Cleaning up expired images" err="Operation not found" operation=2073d73d-a990-461c-b36c-025debcfb13d project= status=Success
    time="2022-11-03T12:18:18-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/xxxxx/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"
    time="2022-11-03T17:02:34-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/yyyyy/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"
    time="2022-11-03T17:11:02-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/zzzzz/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"
    time="2022-11-08T05:20:10-05:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/windowsvm09/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"

root@server1:/home/someadmin# lxc start ubuntuservervm01

root@server1:/home/someadmin# cat /var/snap/lxd/common/lxd/logs/lxd.log
    time="2022-10-31T15:34:15-04:00" level=warning msg=" - Couldn't find the CGroup network priority controller, network priority will be ignored"
    time="2022-10-31T15:34:23-04:00" level=warning msg="Failed to delete operation" class=task description="Pruning leftover image files" err="Operation not found" operation=0579cc32-cea0-44a1-9bfb-614c4b0f7d11 project= status=Success
    time="2022-10-31T15:34:24-04:00" level=warning msg="Failed to delete operation" class=task description="Remove orphaned operations" err="Operation not found" operation=b36749d9-3590-45e2-9370-01beb5d5560b project= status=Success
    time="2022-10-31T15:34:24-04:00" level=warning msg="Failed to delete operation" class=task description="Cleaning up expired images" err="Operation not found" operation=2073d73d-a990-461c-b36c-025debcfb13d project= status=Success
    time="2022-11-03T12:18:18-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/xxxxx/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"
    time="2022-11-03T17:02:34-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/yyyyy/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"
    time="2022-11-03T17:11:02-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/zzzzz/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"
    time="2022-11-08T05:20:10-05:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/windowsvm09/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"

root@server1:/home/someadmin# cat /var/snap/lxd/common/lxd/logs/project_ubuntuservervm01/qemu.log
    warning: failed to register linux io_uring ring file descriptor
    qemu-system-x86_64: Issue while setting TUNSETSTEERINGEBPF: Invalid argument with fd: 46, prog_fd: -1

This is an internal environment, so if it would help to have remote access to poke around, let me know. There are some VMs still in this state that aren't being used yet.

markrattray commented 1 year ago

Following on from the above, I just had the same error (RTNETLINK answers: Address already in use) when restarting an Ubuntu Desktop 22.04 /cloud VM from the images remote repo, which had lost network connectivity. The Ubuntu Server 22.04 VM described above, which had also lost network, was fine on reboot. So either not all Ubuntu images recover from this on start/restart, or it has nothing to do with the image used.

markrattray commented 1 year ago

Good morning.

This issue still says incomplete, but I think I've answered everything... please let me know if you are still waiting for something from me.

Thanks

tomponline commented 1 year ago

The next time you lose connectivity please can you gather the output of lxc config show <instance> --expanded for the instance that is down, as well as sudo ps aux | grep qemu so we can see if perhaps the qemu process is dying unexpectedly and leaving its NIC in an unclean state.
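
Something along these lines captures both in one go (a sketch; set vm to whichever instance is down):

vm=ud2204-ivm-cxv01   # hypothetical: the affected instance
lxc config show "$vm" --expanded > "${vm}_config.yaml"
sudo ps aux | grep '[q]emu' > qemu_processes.txt   # the [q] trick excludes the grep itself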

tomponline commented 1 year ago

Also, can you gather the output of sudo dmesg to see if there are any out-of-memory killer scenarios happening when the VM goes down.
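
A quick filter for that would be something like:

sudo dmesg -T | grep -iE 'out of memory|oom-kill|killed process'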

markrattray commented 1 year ago

Hi Tom

Thanks for getting back to me. Here is an Ubuntu Desktop 22.04 VM from the default images repo. It was restarted on 08/11, and then it looks like it lost network later on 09/11. I will provide dmesg tomorrow.

> lxc config show ud2204-ivm-cxv01 --expanded

architecture: x86_64
config:
  cloud-init.user-data: |
    #cloud-config
    packages:
      - apt-transport-https
      - gpg
    package_upgrade: true
    timezone: America/New_York
  image.architecture: amd64
  image.description: Ubuntu jammy amd64 (20220821_07:42)
  image.os: Ubuntu
  image.release: jammy
  image.serial: "20220821_07:42"
  image.type: disk-kvm.img
  image.variant: desktop
  limits.cpu: "6"
  limits.memory: 6GiB
  security.syscalls.intercept.sysinfo: "true"
  volatile.base_image: d7c196be900f47cbcc6167031bc1521ec31a11e6b117ebebbc6234f41fe57edf
  volatile.cloud-init.instance-id: add76d39-6ffa-43b9-8331-67b172686ff7
  volatile.eth0.host_name: mac271cd964
  volatile.eth0.hwaddr: 00:16:3e:f9:d2:d5
  volatile.eth0.last_state.created: "false"
  volatile.last_state.power: RUNNING
  volatile.uuid: 092a4884-128c-4b05-b4b5-876d322f9df9
  volatile.vsock_id: "37"
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eno1
    type: nic
  root:
    path: /
    pool: sp00
    size: 30GiB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

> ps aux | grep qemu

lxd         7422  1.3  0.3 18056744 1506848 ?    Sl   Oct27 353:24 /snap/lxd/23853/bin/qemu-system-x86_64 -S -name us2204-ivm-dkr02 -uuid a1a2cc0a-e3c9-44c2-9816-be90fb503bbe -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/pd_us2204-vm-dkr02/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/pd_us2204-vm-dkr02/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/pd_us2204-vm-dkr02/qemu.pid -D /var/snap/lxd/common/lxd/logs/pd_us2204-vm-dkr02/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
lxd         8636  2.0  0.3 5292460 1457116 ?     Sl   Oct27 524:10 /snap/lxd/23853/bin/qemu-system-x86_64 -S -name us2204-dvm-gfs01 -uuid 18a30934-e904-4cd4-9a26-f7ead2198230 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/pd_us2204-vm-gfs01/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/pd_us2204-vm-gfs01/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/pd_us2204-vm-gfs01/qemu.pid -D /var/snap/lxd/common/lxd/logs/pd_us2204-vm-gfs01/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
lxd         9436  2.9  0.2 5373676 1161676 ?     Sl   Oct27 759:57 /snap/lxd/23853/bin/qemu-system-x86_64 -S -name us2204-dvm-gfs03 -uuid d0377c8a-fcd3-4924-9c16-d54be20eb7f8 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/pd_us2204-vm-gfs03/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/pd_us2204-vm-gfs03/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/pd_us2204-vm-gfs03/qemu.pid -D /var/snap/lxd/common/lxd/logs/pd_us2204-vm-gfs03/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
lxd        10112  3.8  1.6 7479044 6369980 ?     Sl   Oct27 998:49 /snap/lxd/23853/bin/qemu-system-x86_64 -S -name mw2022-dvm-mad01 -uuid ccf95b32-7d6e-4deb-b3e7-e2feeb3ef273 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/pd_mw2022-vm-mad01/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/pd_mw2022-vm-mad01/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/pd_mw2022-vm-mad01/qemu.pid -D /var/snap/lxd/common/lxd/logs/pd_mw2022-vm-mad01/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
lxd        47843  4.1  1.6 7884564 6370356 ?     Sl   Oct27 1071:17 /snap/lxd/23853/bin/qemu-system-x86_64 -S -name mw2022-qvm-mad01 -uuid dad8e897-0c10-419c-8a62-db4d8ce3eba9 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/pq_mw2022-qvm-mad01/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/pq_mw2022-qvm-mad01/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/pq_mw2022-qvm-mad01/qemu.pid -D /var/snap/lxd/common/lxd/logs/pq_mw2022-qvm-mad01/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
root       49740  0.0  0.0   6608  2328 pts/1    S+   16:22   0:00 grep --color=auto qemu
lxd       358876  4.0  1.5 7369300 6323896 ?     Sl   Nov08 371:08 /snap/lxd/23889/bin/qemu-system-x86_64 -S -name mw2022-ivm-cxl01 -uuid 6605369c-07f7-4955-ad6f-c23fdff5c3a5 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/mw2022-ivm-cxl01/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/mw2022-ivm-cxl01/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/mw2022-ivm-cxl01/qemu.pid -D /var/snap/lxd/common/lxd/logs/mw2022-ivm-cxl01/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
lxd       541995  5.2  1.5 7451736 6307292 ?     Sl   Nov08 474:33 /snap/lxd/23889/bin/qemu-system-x86_64 -S -name mw2022-ivm-mis01 -uuid d2c01526-8337-4f7e-a295-f9b1e0d0a8ad -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/mw2022-ivm-mis01/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/mw2022-ivm-mis01/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/mw2022-ivm-mis01/qemu.pid -D /var/snap/lxd/common/lxd/logs/mw2022-ivm-mis01/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
lxd       682653  4.5  1.5 7525292 6316276 ?     Sl   Nov08 409:53 /snap/lxd/23889/bin/qemu-system-x86_64 -S -name mw2022-ivm-mad01 -uuid c117ec52-0316-43b5-bace-da68c630be95 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/mw2022-ivm-mad01/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/mw2022-ivm-mad01/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/mw2022-ivm-mad01/qemu.pid -D /var/snap/lxd/common/lxd/logs/mw2022-ivm-mad01/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
lxd      1338572  4.2  1.5 7434064 6326224 ?     Sl   Nov08 382:18 /snap/lxd/23889/bin/qemu-system-x86_64 -S -name mw2022-ivm-mfs01 -uuid b4aa897a-e2ea-4882-adb9-1b6319eef9b2 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/mw2022-ivm-mfs01/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/mw2022-ivm-mfs01/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/mw2022-ivm-mfs01/qemu.pid -D /var/snap/lxd/common/lxd/logs/mw2022-ivm-mfs01/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
lxd      1455377  111  0.5 5310036 2271936 ?     Sl   Nov08 9964:31 /snap/lxd/23889/bin/qemu-system-x86_64 -S -name bsdctx-ivm-adc99 -uuid 5d65dcbb-8e9c-4edb-b762-f670f29a18b8 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/bsdctx-ivm-adc99/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/bsdctx-ivm-adc99/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/bsdctx-ivm-adc99/qemu.pid -D /var/snap/lxd/common/lxd/logs/bsdctx-ivm-adc99/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd -bios /var/lib/snapd/hostfs/usr/share/seabios/bios-256k.bin -machine pc-q35-2.6
lxd      1591633  108  0.5 5298728 2283036 ?     Sl   Nov08 9727:35 /snap/lxd/23889/bin/qemu-system-x86_64 -S -name bsdctx-ivm-adc01 -uuid a56461e6-bb8b-4223-bb7e-f2a197b59318 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/bsdctx-ivm-adc01/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/bsdctx-ivm-adc01/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/bsdctx-ivm-adc01/qemu.pid -D /var/snap/lxd/common/lxd/logs/bsdctx-ivm-adc01/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd -bios /var/lib/snapd/hostfs/usr/share/seabios/bios-256k.bin -machine pc-q35-2.6
lxd      2360735  4.6  1.5 7437176 6297116 ?     Sl   Nov08 431:30 /snap/lxd/23889/bin/qemu-system-x86_64 -S -name mw2022-ivm-cxv01 -uuid 916b5e77-d1ec-4515-9190-51ea7fa47333 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/mw2022-ivm-cxv01/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/mw2022-ivm-cxv01/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/mw2022-ivm-cxv01/qemu.pid -D /var/snap/lxd/common/lxd/logs/mw2022-ivm-cxv01/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
lxd      2447690  103  4.2 18082408 16796572 ?   Sl   Nov08 9600:12 /snap/lxd/23889/bin/qemu-system-x86_64 -S -name us2204-dvm-dkr01 -uuid 49e10686-c824-4324-9713-33199dd2f306 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/pd_us2204-dvm-dkr01/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/pd_us2204-dvm-dkr01/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/pd_us2204-dvm-dkr01/qemu.pid -D /var/snap/lxd/common/lxd/logs/pd_us2204-dvm-dkr01/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
lxd      3523199  9.0  1.3 7596888 5408700 ?     Sl   Nov04 1349:29 /snap/lxd/23889/bin/qemu-system-x86_64 -S -name ud2204-ivm-cxv02 -uuid 12c0b79f-7518-4679-923d-bc8efc76463f -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/ud2204-ivm-cxv02/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/ud2204-ivm-cxv02/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/ud2204-ivm-cxv02/qemu.pid -D /var/snap/lxd/common/lxd/logs/ud2204-ivm-cxv02/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
lxd      3677613  3.9  0.7 7561988 2997400 ?     Sl   Nov08 360:47 /snap/lxd/23889/bin/qemu-system-x86_64 -S -name ud2204-ivm-cxv01 -uuid 092a4884-128c-4b05-b4b5-876d322f9df9 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/ud2204-ivm-cxv01/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/ud2204-ivm-cxv01/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/ud2204-ivm-cxv01/qemu.pid -D /var/snap/lxd/common/lxd/logs/ud2204-ivm-cxv01/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
lxd      3712780 15.2  1.5 7521444 6316028 ?     Sl   Nov08 1388:01 /snap/lxd/23889/bin/qemu-system-x86_64 -S -name ud2204-ivm-cxc01 -uuid 771d0cc1-3e7a-4f97-8bdd-aeea2afd190e -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/ud2204-ivm-cxc01/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/ud2204-ivm-cxc01/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/ud2204-ivm-cxc01/qemu.pid -D /var/snap/lxd/common/lxd/logs/ud2204-ivm-cxc01/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd
tomponline commented 1 year ago

OK, so it's still running:

lxd      3677613  3.9  0.7 7561988 2997400 ?     Sl   Nov08 360:47 /snap/lxd/23889/bin/qemu-system-x86_64 -S -name ud2204-ivm-cxv01 -uuid 092a4884-128c-4b05-b4b5-876d322f9df9 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/ud2204-ivm-cxv01/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/ud2204-ivm-cxv01/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/ud2204-ivm-cxv01/qemu.pid -D /var/snap/lxd/common/lxd/logs/ud2204-ivm-cxv01/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd

So with the VM that has lost networking, can you still enter it using lxc exec <instance> -- bash?

tomponline commented 1 year ago

I wonder if the parent device eno1 is fluctuating.
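
One way to check that would be to leave a link-event watcher running on the host (a sketch, assuming a reasonably recent iproute2; it logs kernel link events for eno1 with timestamps until interrupted):

ip -timestamp monitor link dev eno1 | tee eno1_link_events.log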

markrattray commented 1 year ago

Good morning Tom.

Thanks for the quick response.

Re:

I wonder if the parent device eno1 is fluctuating.

I don't know yet, and haven't noticed anything if it is. Good thinking.

Yes, lxc exec still works fine, even right now:

root@us2204-iph-lxd03:/home/someadmin# lxc exec ud2204-ivm-cxv01 bash
    root@ud2204-ivm-cxv01:~# ping 192.168.0.10
        ping: connect: Network is unreachable

For now, here is dmesg for the physical host dedicated to LXD; right at the bottom are some device renaming activities, in case they are anything to worry about:

[Mon Nov 14 09:02:22 2022] phys6SXvGa: renamed from mac97d798dc
[Mon Nov 14 09:02:22 2022] eth0: renamed from phys6SXvGa
[Mon Nov 14 09:02:26 2022] physONAqZC: renamed from eth0
[Mon Nov 14 09:02:26 2022] macbb1031c3: renamed from physONAqZC

dmesg upload: dmesg_20221114-2016z_t_cleaned.txt

Weirdly, for ud2204-ivm-cxv01 I cannot find either the virtual interface name or its MAC in the dmesg:

mac271cd964
00:16:3e:f9:d2:d5

For the Ubuntu Desktop VM example, I rebooted it:

reboot   system boot  5.15.0-52-generi Tue Nov  8 08:26   still running
reboot   system boot  5.15.0-52-generi Tue Nov  8 08:15 - 08:24  (00:08)

The only entries for that macvlan device in the physical server's syslog were at the time of the reboot:

grep mac271cd964 us2204-iph-lxd03_syslog.1 
    Nov  8 08:26:14 us2204-iph-lxd03 systemd-networkd[3049]: mac271cd964: Link UP
    Nov  8 08:26:14 us2204-iph-lxd03 systemd-networkd[3049]: mac271cd964: Gained carrier
    Nov  8 08:26:16 us2204-iph-lxd03 systemd-networkd[3049]: mac271cd964: Gained IPv6LL

The physical host's syslog: us2204-iph-lxd03_syslog.1.zip

Then it stopped talking to the ISC DHCP service the next day. These are the last entries for it:

Nov  9 04:26:31 us2204-ict-dhi01 dhcpd[372]: DHCPREQUEST for 192.168.0.157 from 00:16:3e:f9:d2:d5 (ud2204-ivm-cxv01) via eth0
Nov  9 04:26:31 us2204-ict-dhi01 dhcpd[372]: DHCPACK on 192.168.0.157 to 00:16:3e:f9:d2:d5 (ud2204-ivm-cxv01) via eth0
Nov  9 04:26:31 us2204-ict-dhi01 dhcpd[372]: Added new forward map from ud2204-ivm-cxv01.domain.tld to 192.168.0.157
Nov  9 04:26:31 us2204-ict-dhi01 dhcpd[372]: Added reverse map from 157.0.168.192.0.168.192.in-addr.arpa. to ud2204-ivm-cxv01.domain.tld
Nov  9 06:56:32 us2204-ict-dhi01 dhcpd[372]:   host-name: ud2204-ivm-cxv01
Nov  9 06:56:32 us2204-ict-dhi01 dhcpd[372]: DHCPREQUEST for 192.168.0.157 from 00:16:3e:f9:d2:d5 (ud2204-ivm-cxv01) via eth0
Nov  9 06:56:32 us2204-ict-dhi01 dhcpd[372]: DHCPACK on 192.168.0.157 to 00:16:3e:f9:d2:d5 (ud2204-ivm-cxv01) via eth0
Nov  9 06:56:32 us2204-ict-dhi01 dhcpd[372]: Added new forward map from ud2204-ivm-cxv01.domain.tld to 192.168.0.157
Nov  9 06:56:32 us2204-ict-dhi01 dhcpd[372]: Added reverse map from 157.0.168.192.0.168.192.in-addr.arpa. to ud2204-ivm-cxv01.domain.tld
Nov  9 09:26:31 us2204-ict-dhi01 dhcpd[372]:   host-name: ud2204-ivm-cxv01
Nov  9 09:26:31 us2204-ict-dhi01 dhcpd[372]: DHCPREQUEST for 192.168.0.157 from 00:16:3e:f9:d2:d5 (ud2204-ivm-cxv01) via eth0
Nov  9 09:26:31 us2204-ict-dhi01 dhcpd[372]: DHCPACK on 192.168.0.157 to 00:16:3e:f9:d2:d5 (ud2204-ivm-cxv01) via eth0
Nov  9 09:26:31 us2204-ict-dhi01 dhcpd[372]: Added new forward map from ud2204-ivm-cxv01.domain.tld to 192.168.0.157
Nov  9 09:26:31 us2204-ict-dhi01 dhcpd[372]: Added reverse map from 157.0.168.192.0.168.192.in-addr.arpa. to ud2204-ivm-cxv01.domain.tld
Nov  9 11:56:31 us2204-ict-dhi01 dhcpd[372]:   host-name: ud2204-ivm-cxv01
Nov  9 11:56:31 us2204-ict-dhi01 dhcpd[372]: DHCPREQUEST for 192.168.0.157 from 00:16:3e:f9:d2:d5 (ud2204-ivm-cxv01) via eth0
Nov  9 11:56:31 us2204-ict-dhi01 dhcpd[372]: DHCPACK on 192.168.0.157 to 00:16:3e:f9:d2:d5 (ud2204-ivm-cxv01) via eth0
Nov  9 11:56:31 us2204-ict-dhi01 dhcpd[372]: Added new forward map from ud2204-ivm-cxv01.domain.tld to 192.168.0.157
Nov  9 11:56:31 us2204-ict-dhi01 dhcpd[372]: Added reverse map from 157.0.168.192.0.168.192.in-addr.arpa. to ud2204-ivm-cxv01.domain.tld
Nov  9 14:26:31 us2204-ict-dhi01 dhcpd[372]:   host-name: ud2204-ivm-cxv01
Nov  9 14:26:31 us2204-ict-dhi01 dhcpd[372]: DHCPREQUEST for 192.168.0.157 from 00:16:3e:f9:d2:d5 (ud2204-ivm-cxv01) via eth0
Nov  9 14:26:31 us2204-ict-dhi01 dhcpd[372]: DHCPACK on 192.168.0.157 to 00:16:3e:f9:d2:d5 (ud2204-ivm-cxv01) via eth0
Nov  9 14:26:31 us2204-ict-dhi01 dhcpd[372]: Added new forward map from ud2204-ivm-cxv01.domain.tld to 192.168.0.157
Nov  9 14:26:31 us2204-ict-dhi01 dhcpd[372]: Added reverse map from 157.0.168.192.0.168.192.in-addr.arpa. to ud2204-ivm-cxv01.domain.tld
Nov  9 16:56:32 us2204-ict-dhi01 dhcpd[372]:   host-name: ud2204-ivm-cxv01
Nov  9 16:56:32 us2204-ict-dhi01 dhcpd[372]: DHCPREQUEST for 192.168.0.157 from 00:16:3e:f9:d2:d5 (ud2204-ivm-cxv01) via eth0
Nov  9 16:56:32 us2204-ict-dhi01 dhcpd[372]: DHCPACK on 192.168.0.157 to 00:16:3e:f9:d2:d5 (ud2204-ivm-cxv01) via eth0
Nov  9 16:56:32 us2204-ict-dhi01 dhcpd[372]: Added new forward map from ud2204-ivm-cxv01.domain.tld to 192.168.0.157
Nov  9 16:56:32 us2204-ict-dhi01 dhcpd[372]: Added reverse map from 157.0.168.192.0.168.192.in-addr.arpa. to ud2204-ivm-cxv01.domain.tld
Nov  9 21:56:32 us2204-ict-dhi01 dhcpd[372]: Removed forward map from ud2204-ivm-cxv01.domain.tld to 192.168.0.157

As far as I can tell, ud2204-ivm-cxv01 might have started losing its connection before 13:19 on 09 Nov, based on these syslog entries in the instance; however, there were DHCP transactions later on, as shown above:

Nov  9 13:19:32 ud2204-ivm-cxv01 systemd[1]: ctxvda.service: Main process exited, code=exited, status=1/FAILURE
Nov  9 13:19:32 ud2204-ivm-cxv01 systemd[1]: ctxvda.service: Failed with result 'exit-code'.
Nov  9 13:21:32 ud2204-ivm-cxv01 systemd[1]: Started Citrix DotNet VDA Service.
Nov  9 13:21:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Session starting for pid 18003.
Nov  9 13:21:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Process 18003 has named itself "citrix-ctxreg".
Nov  9 13:21:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Session closing for pid 18003.
Nov  9 13:21:32 ud2204-ivm-cxv01 systemd[1]: ctxvda.service: Main process exited, code=exited, status=1/FAILURE
Nov  9 13:21:32 ud2204-ivm-cxv01 systemd[1]: ctxvda.service: Failed with result 'exit-code'.
Nov  9 13:22:24 ud2204-ivm-cxv01 nmbd[862]: [2022/11/09 13:22:24.013138,  0] ../../source3/nmbd/nmbd_become_lmb.c:398(become_local_master_stage2)
Nov  9 13:22:24 ud2204-ivm-cxv01 nmbd[862]:   *****
Nov  9 13:22:24 ud2204-ivm-cxv01 nmbd[862]:   
Nov  9 13:22:24 ud2204-ivm-cxv01 nmbd[862]:   Samba name server ud2204-ivm-cxv01 is now a local master browser for workgroup PLANBOX on subnet 192.168.0.157
Nov  9 13:22:24 ud2204-ivm-cxv01 nmbd[862]:   
Nov  9 13:22:24 ud2204-ivm-cxv01 nmbd[862]:   *****
Nov  9 13:23:32 ud2204-ivm-cxv01 systemd[1]: Started Citrix DotNet VDA Service.
Nov  9 13:23:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Session starting for pid 18016.
Nov  9 13:23:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Process 18016 has named itself "citrix-ctxreg".
Nov  9 13:23:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Session closing for pid 18016.
Nov  9 13:23:32 ud2204-ivm-cxv01 systemd[1]: ctxvda.service: Main process exited, code=exited, status=1/FAILURE
Nov  9 13:23:32 ud2204-ivm-cxv01 systemd[1]: ctxvda.service: Failed with result 'exit-code'.
Nov  9 13:24:17 ud2204-ivm-cxv01 systemd[1]: Starting Daily apt download activities...
Nov  9 13:24:47 ud2204-ivm-cxv01 apt-helper[18024]: E: Sub-process nm-online returned an error code (1)
Nov  9 13:24:56 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 192.168.0.12.
Nov  9 13:25:01 ud2204-ivm-cxv01 kernel: [104318.953301] audit: type=1400 audit(1668018301.089:2234): apparmor="ALLOWED" operation="open" profile="/usr/sbin/sssd" name="/proc/18077/cmdline" pid=854 comm="sssd_nss" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Nov  9 13:25:01 ud2204-ivm-cxv01 CRON[18078]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Nov  9 13:25:01 ud2204-ivm-cxv01 kernel: [104318.955190] audit: type=1400 audit(1668018301.093:2235): apparmor="ALLOWED" operation="open" profile="/usr/sbin/sssd" name="/proc/18078/cmdline" pid=854 comm="sssd_nss" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Nov  9 13:25:05 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set TCP instead of UDP for DNS server 192.168.0.12.
Nov  9 13:25:32 ud2204-ivm-cxv01 systemd[1]: Started Citrix DotNet VDA Service.
Nov  9 13:25:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Session starting for pid 18085.
Nov  9 13:25:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Process 18085 has named itself "citrix-ctxreg".
Nov  9 13:25:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Session closing for pid 18085.
Nov  9 13:25:32 ud2204-ivm-cxv01 systemd[1]: ctxvda.service: Main process exited, code=exited, status=1/FAILURE
Nov  9 13:25:32 ud2204-ivm-cxv01 systemd[1]: ctxvda.service: Failed with result 'exit-code'.
Nov  9 13:25:33 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set UDP instead of TCP for DNS server 192.168.0.12.
Nov  9 13:25:36 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set TCP instead of UDP for DNS server 192.168.0.12.
Nov  9 13:26:04 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set UDP instead of TCP for DNS server 192.168.0.12.
Nov  9 13:26:07 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set TCP instead of UDP for DNS server 192.168.0.12.
Nov  9 13:26:34 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set UDP instead of TCP for DNS server 192.168.0.12.
Nov  9 13:26:38 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set TCP instead of UDP for DNS server 192.168.0.12.
Nov  9 13:27:05 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set UDP instead of TCP for DNS server 192.168.0.12.
Nov  9 13:27:08 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set TCP instead of UDP for DNS server 192.168.0.12.
Nov  9 13:27:32 ud2204-ivm-cxv01 systemd[1]: Started Citrix DotNet VDA Service.
Nov  9 13:27:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Session starting for pid 18100.
Nov  9 13:27:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Process 18100 has named itself "citrix-ctxreg".
Nov  9 13:27:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Session closing for pid 18100.
Nov  9 13:27:32 ud2204-ivm-cxv01 systemd[1]: ctxvda.service: Main process exited, code=exited, status=1/FAILURE
Nov  9 13:27:32 ud2204-ivm-cxv01 systemd[1]: ctxvda.service: Failed with result 'exit-code'.
Nov  9 13:27:36 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set UDP instead of TCP for DNS server 192.168.0.12.
Nov  9 13:27:39 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set TCP instead of UDP for DNS server 192.168.0.12.
Nov  9 13:28:00 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set UDP instead of TCP for DNS server 192.168.0.12.
Nov  9 13:28:04 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set TCP instead of UDP for DNS server 192.168.0.12.
Nov  9 13:28:31 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set UDP instead of TCP for DNS server 192.168.0.12.
Nov  9 13:28:34 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set TCP instead of UDP for DNS server 192.168.0.12.
Nov  9 13:29:02 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set UDP instead of TCP for DNS server 192.168.0.12.
Nov  9 13:29:05 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set TCP instead of UDP for DNS server 192.168.0.12.
Nov  9 13:29:30 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set UDP instead of TCP for DNS server 192.168.0.12.
Nov  9 13:29:32 ud2204-ivm-cxv01 systemd[1]: Started Citrix DotNet VDA Service.
Nov  9 13:29:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Session starting for pid 18113.
Nov  9 13:29:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Process 18113 has named itself "citrix-ctxreg".
Nov  9 13:29:32 ud2204-ivm-cxv01 citrix-ctxlogd[656]: Session closing for pid 18113.
Nov  9 13:29:32 ud2204-ivm-cxv01 systemd[1]: ctxvda.service: Main process exited, code=exited, status=1/FAILURE
Nov  9 13:29:32 ud2204-ivm-cxv01 systemd[1]: ctxvda.service: Failed with result 'exit-code'.
Nov  9 13:29:33 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set TCP instead of UDP for DNS server 192.168.0.12.
Nov  9 13:30:00 ud2204-ivm-cxv01 systemd-resolved[579]: Using degraded feature set UDP instead of TCP for DNS server 192.168.0.12.

I have to move on to some other work for now, so will take a fresh look later on. I will also rig something up tomorrow so we know exactly when a VM loses network.

tomponline commented 1 year ago

I have to move on to some other work for now, so will take a fresh look later on. I will also rig something up tomorrow so we know exactly when a VM loses network.

That would be useful, along with the contents of dmesg on both the host and the guest so we can correlate any issues.

markrattray commented 1 year ago

Good morning

So I set up the VM and load-balancer monitoring, and it detected a failure within 10h30m of instance launch.

Setup

At 10h14 UTC on 16 Nov 22, I created a very standard VM and just installed nginx in it:

lxc init images:ubuntu/22.04/cloud us2204-ivm-webnetmonitor01 --vm -c limits.memory=1GiB -c limits.cpu=2
lxc config device override us2204-ivm-webnetmonitor01 root size=15GiB
lxc start us2204-ivm-webnetmonitor01

lxc exec us2204-ivm-webnetmonitor01 bash
apt update && apt install nginx

Checked that the default web page was up, and set up monitoring of it on a 2-node synchronous load-balancer cluster.
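
For reference, even a simple probe loop from another host would pinpoint the failure time; a hypothetical sketch:

url=http://192.168.0.103/   # the VM's address, per the DHCP logs below
while sleep 10; do
    curl -fsS -m 5 -o /dev/null "$url" || date -u '+%F %T probe failed'
done | tee webnetmonitor_probe.log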

Outage

Both the load balancer nodes detected the loss of network for us2204-ivm-webnetmonitor01 at 16:29:03 EST / 21:29:03 UTC on 16 Nov, and both alerted me.

Logs

LXD host us2204-iph-lxd03 dmesg -T us2204-iph-lxd03_dmesg-T_20221116_cleaned.txt

LXD host us2204-iph-lxd03 syslog 16/17 Nov us2204-iph-lxd03_syslog_20221116_cleaned.zip

VM us2204-ivm-webnetmonitor01 dmesg -T, on LXD host us2204-iph-lxd03 us2204-ivm-webnetmonitor_dmesg-T_20221116_cleaned.txt

VM us2204-ivm-webnetmonitor01 syslog, on LXD host us2204-iph-lxd03 us2204-ivm-webnetmonitor_syslog_20221116_cleaned.txt

Time zone change in VM instance logs: the VM's logs start out in UTC and then switch to EST, due to the location and the cloud-init config in the default profile.

DHCP system container logs: interestingly, there were some attempts with the DHCP server (an LXD system container on the same host) following the outage, but the instance still remains off the network:

root@us2204-ict-dhi01:~# date
    Thu Nov 17 06:10:07 EST 2022
root@us2204-ict-dhi01:~# grep webnetmonitor /var/log/syslog
    Nov 16 13:29:49 us2204-ict-dhi01 dhcpd[372]:   host-name: us2204-ivm-webnetmonitor01
    Nov 16 13:29:49 us2204-ict-dhi01 dhcpd[372]: DHCPREQUEST for 192.168.0.103 from 00:16:3e:90:94:89 (us2204-ivm-webnetmonitor01) via eth0
    Nov 16 13:29:49 us2204-ict-dhi01 dhcpd[372]: DHCPACK on 192.168.0.103 to 00:16:3e:90:94:89 (us2204-ivm-webnetmonitor01) via eth0
    Nov 16 13:29:49 us2204-ict-dhi01 dhcpd[372]: Added new forward map from us2204-ivm-webnetmonitor01.domain.tld to 192.168.0.103
    Nov 16 13:29:49 us2204-ict-dhi01 dhcpd[372]: Added reverse map from 103.0.168.192.0.168.192.in-addr.arpa. to us2204-ivm-webnetmonitor01.domain.tld
    Nov 16 15:59:49 us2204-ict-dhi01 dhcpd[372]:   host-name: us2204-ivm-webnetmonitor01
    Nov 16 15:59:49 us2204-ict-dhi01 dhcpd[372]: DHCPREQUEST for 192.168.0.103 from 00:16:3e:90:94:89 (us2204-ivm-webnetmonitor01) via eth0
    Nov 16 15:59:49 us2204-ict-dhi01 dhcpd[372]: DHCPACK on 192.168.0.103 to 00:16:3e:90:94:89 (us2204-ivm-webnetmonitor01) via eth0
    Nov 16 15:59:49 us2204-ict-dhi01 dhcpd[372]: Added new forward map from us2204-ivm-webnetmonitor01.domain.tld to 192.168.0.103
    Nov 16 15:59:49 us2204-ict-dhi01 dhcpd[372]: Added reverse map from 103.0.168.192.0.168.192.in-addr.arpa. to us2204-ivm-webnetmonitor01.domain.tld
    Nov 16 18:29:48 us2204-ict-dhi01 dhcpd[372]:   host-name: us2204-ivm-webnetmonitor01
    Nov 16 18:29:48 us2204-ict-dhi01 dhcpd[372]: DHCPREQUEST for 192.168.0.103 from 00:16:3e:90:94:89 (us2204-ivm-webnetmonitor01) via eth0
    Nov 16 18:29:48 us2204-ict-dhi01 dhcpd[372]: DHCPACK on 192.168.0.103 to 00:16:3e:90:94:89 (us2204-ivm-webnetmonitor01) via eth0
    Nov 16 18:29:48 us2204-ict-dhi01 dhcpd[372]: Added new forward map from us2204-ivm-webnetmonitor01.domain.tld to 192.168.0.103
    Nov 16 18:29:48 us2204-ict-dhi01 dhcpd[372]: Added reverse map from 103.0.168.192.0.168.192.in-addr.arpa. to us2204-ivm-webnetmonitor01.domain.tld
    Nov 16 20:59:48 us2204-ict-dhi01 dhcpd[372]:   host-name: us2204-ivm-webnetmonitor01
    Nov 16 20:59:48 us2204-ict-dhi01 dhcpd[372]: DHCPREQUEST for 192.168.0.103 from 00:16:3e:90:94:89 (us2204-ivm-webnetmonitor01) via eth0
    Nov 16 20:59:48 us2204-ict-dhi01 dhcpd[372]: DHCPACK on 192.168.0.103 to 00:16:3e:90:94:89 (us2204-ivm-webnetmonitor01) via eth0
    Nov 16 20:59:48 us2204-ict-dhi01 dhcpd[372]: Added new forward map from us2204-ivm-webnetmonitor01.domain.tld to 192.168.0.103
    Nov 16 20:59:48 us2204-ict-dhi01 dhcpd[372]: Added reverse map from 103.0.168.192.0.168.192.in-addr.arpa. to us2204-ivm-webnetmonitor01.domain.tld
    Nov 17 01:59:48 us2204-ict-dhi01 dhcpd[372]: Removed forward map from us2204-ivm-webnetmonitor01.domain.tld to 192.168.0.103

Thanks.

markrattray commented 1 year ago

Forgot the lxc config show for the VM:

lxc config show us2204-ivm-webnetmonitor01 --expanded
    architecture: x86_64
    config:
      cloud-init.user-data: |
        #cloud-config
        packages:
          - apt-transport-https
          - gpg
        package_upgrade: true
        timezone: America/New_York
      image.architecture: amd64
      image.description: Ubuntu jammy amd64 (20221115_07:42)
      image.os: Ubuntu
      image.release: jammy
      image.serial: "20221115_07:42"
      image.type: disk-kvm.img
      image.variant: cloud
      limits.cpu: "2"
      limits.memory: 1GiB
      security.syscalls.intercept.sysinfo: "true"
      volatile.base_image: ec5544c7adf0ec0ec4cc6fb2ad53ae0b516acbb9f23a8ba7aede3b54352e419a
      volatile.cloud-init.instance-id: 07855ee4-1c07-4660-992d-6a669cb20f75
      volatile.eth0.host_name: mac42a83eba
      volatile.eth0.hwaddr: 00:16:3e:90:94:89
      volatile.eth0.last_state.created: "false"
      volatile.last_state.power: RUNNING
      volatile.uuid: de264cd7-5dbe-486b-8b71-876021bb2561
      volatile.vsock_id: "116"
    devices:
      eth0:
        name: eth0
        nictype: macvlan
        parent: eno1
        type: nic
      root:
        path: /
        pool: sp00
        size: 15GiB
        type: disk
    ephemeral: false
    profiles:
    - default
    stateful: false
    description: ""
tomponline commented 1 year ago

Well, I can see at least one LXD crash in there due to the metrics API endpoint; that needs fixing.
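
For anyone reading the trace below: Go's runtime deliberately aborts the whole process when a plain `map` is read in one goroutine while another writes it, which is what `api_metrics.go:199` is hitting. A minimal illustration of that bug class and the usual mutex guard (a sketch only, not the actual LXD code):

```go
// Demonstrates the "concurrent map read and map write" class of crash
// and the conventional fix. This is an illustration, not LXD's code.
package main

import "sync"

func main() {
	m := map[string]int{}

	// Unsynchronized access like this crashes the runtime under load:
	//   go func() { for { m["k"]++ } }()
	//   go func() { for { _ = m["k"] } }()

	// Guarding every access with one mutex (or using sync.Map) is the fix.
	var (
		mu sync.Mutex
		wg sync.WaitGroup
	)
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				mu.Lock()
				m["k"]++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
}
```

Building or testing with `-race` flags the unsynchronized version immediately. The full goroutine dump from the daemon log: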

Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: fatal error: concurrent map read and map write
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1384626 [running]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: runtime.throw({0x1c45bfb?, 0x44?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/panic.go:992 +0x71 fp=0xc000d69d40 sp=0xc000d69d10 pc=0x441411
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: runtime.mapaccess1_faststr(0xc001a34900?, 0xc0000bbde0?, {0xc000528077, 0x7})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/map_faststr.go:22 +0x3a5 fp=0xc000d69da8 sp=0xc000d69d40 pc=0x41e345
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.metricsGet.func2({0x1f8ac28, 0xc001a34900})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:199 +0x1ca fp=0xc000d69fc0 sp=0xc000d69da8 pc=0x165230a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.metricsGet.func4()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:200 +0x2e fp=0xc000d69fe0 sp=0xc000d69fc0 pc=0x165210e
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: runtime.goexit()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000d69fe8 sp=0xc000d69fe0 pc=0x474b21
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by main.metricsGet
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:186 +0xe65
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1 [select, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.(*cmdDaemon).Run(0xc0003a0d68, 0x0?, {0xc0003929c0, 0x0, 0x0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/main_daemon.go:83 +0x63f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/spf13/cobra.(*Command).execute(0xc000194000, {0xc000114060, 0x4, 0x4})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/spf13/cobra@v1.6.0/command.go:916 +0x862
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/spf13/cobra.(*Command).ExecuteC(0xc000194000)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/spf13/cobra@v1.6.0/command.go:1040 +0x3b4
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/spf13/cobra.(*Command).Execute(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/spf13/cobra@v1.6.0/command.go:968
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.main()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/main.go:220 +0x1a49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 11 [syscall, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: os/signal.signal_recv()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/sigqueue.go:151 +0x2f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: os/signal.loop()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/os/signal/signal_unix.go:23 +0x19
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by os/signal.Notify.func1.1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/os/signal/signal.go:151 +0x2a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1955 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 12 [select, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: database/sql.(*DB).connectionOpener(0xc0000f2680, {0x1f6ec28, 0xc000392bc0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/database/sql/sql.go:1226 +0x8d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by database/sql.OpenDB
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/database/sql/sql.go:794 +0x18d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 9848 [syscall, 137 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x30, 0xc000f6e400, 0x7f39a0003520, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x5fcd00?, 0x200000003?, 0xc0005fcd00?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x30, 0x3, 0x2, 0xc000d6dec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc000b44af0, 0x1c0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1690 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cc00c858, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc001011b80?, 0xc000737497?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc001011b80, {0xc000737497, 0xb69, 0xb69})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc001011b80, {0xc000737497?, 0x443fe0?, 0xc001899d40?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc000141030, {0xc000737497?, 0xc0003e4e08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc0003e4f08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc0003a7f40, {0x1f64720?, 0xc000141030?}, 0xc001710b40, 0xc001710ba0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1529 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 3281 [syscall, 338 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x25, 0xc000872c00, 0x7f39b0006290, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x5fd1e0?, 0x200000003?, 0xc0005fd1e0?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x25, 0x3, 0x2, 0xc000ef5ec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc00133d450, 0xc0008c7d70?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 999 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cc00c948, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc00039b800?, 0xc0012ee2af?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc00039b800, {0xc0012ee2af, 0xd51, 0xd51})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc00039b800, {0xc0012ee2af?, 0x443fe0?, 0xc001025380?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc0003982a8, {0xc0012ee2af?, 0xc0008abe08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc0008abf08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc001522910, {0x1f64720?, 0xc0003982a8?}, 0xc000a811a0, 0xc000a81200)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1622 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44e3b0, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc001010000?, 0xc0017cd28f?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc001010000, {0xc0017cd28f, 0xd71, 0xd71})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc001010000, {0xc0017cd28f?, 0x443fe0?, 0xc001025380?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc000140830, {0xc0017cd28f?, 0xc0003e8e08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc0003e8f08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc0003a75e0, {0x1f64720?, 0xc000140830?}, 0xc0003dfec0, 0xc0003dff20)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 120 [chan receive, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/cluster.runDqliteProxy(0xc000117200, {0xc00038fb34, 0x6}, 0xc00010e130?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/cluster/gateway.go:1140 +0x46
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/cluster.(*Gateway).init
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/cluster/gateway.go:809 +0x591
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2230 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cc00c3a8, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc000f0cd00?, 0xc000ad42af?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc000f0cd00, {0xc000ad42af, 0xd51, 0xd51})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc000f0cd00, {0xc000ad42af?, 0x443fe0?, 0xc001025380?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc000408d20, {0xc000ad42af?, 0xc000745e08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc000745f08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc00133ce60, {0x1f64720?, 0xc000408d20?}, 0xc000a79ce0, 0xc000a79d40)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 129 [IO wait, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44ec20, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc000395920?, 0x0?, 0x1)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).RawRead(0xc000395920, 0xc001020ca0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:766 +0x145
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: os.(*rawConn).Read(0xc000398f48, 0x1?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/os/rawconn.go:31 +0x56
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/mdlayher/socket.(*Conn).read(0xc0009e0c90, {0x1bf0cde?, 0x0?}, 0xc000ebfe30)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/mdlayher/socket@v0.2.3/conn.go:576 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/mdlayher/socket.(*Conn).Accept(0xc0009e0c90, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/mdlayher/socket@v0.2.3/conn.go:313 +0xe5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/mdlayher/vsock.(*listener).Accept(0xc0009ec130)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/mdlayher/vsock@v1.1.1/listener_linux.go:32 +0x2a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/mdlayher/vsock.(*Listener).Accept(0xc000398f50)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/mdlayher/vsock@v1.1.1/vsock.go:133 +0x25
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*listener).Accept(0xc0003a1800)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/tls.go:66 +0x2d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Server).Serve(0xc0001a08c0, {0x1f6b8a8, 0xc0003a1800})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/server.go:3039 +0x385
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/endpoints.(*Endpoints).serve.func1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/endpoints/endpoints.go:440 +0x25
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: gopkg.in/tomb%2ev2.(*Tomb).run(0xc0001159f0, 0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163 +0x36
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by gopkg.in/tomb%2ev2.(*Tomb).Go
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:159 +0xee
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 130 [IO wait, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44eef0, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc0009a1e80?, 0x2?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Accept(0xc0009a1e80)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:614 +0x22c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).accept(0xc0009a1e80)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_unix.go:172 +0x35
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*UnixListener).accept(0x4a2fa6?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/unixsock_posix.go:166 +0x1c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*UnixListener).Accept(0xc0009e0c60)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/unixsock.go:260 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Server).Serve(0xc0001a0620, {0x1f6d9d8, 0xc0009e0c60})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/server.go:3039 +0x385
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/endpoints.(*Endpoints).serve.func1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/endpoints/endpoints.go:440 +0x25
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: gopkg.in/tomb%2ev2.(*Tomb).run(0xc0001159f0, 0xc00010e130?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163 +0x36
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by gopkg.in/tomb%2ev2.(*Tomb).Go
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:159 +0xee
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 131 [IO wait, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44ed10, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc0009a1d00?, 0x7d?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Accept(0xc0009a1d00)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:614 +0x22c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).accept(0xc0009a1d00)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_unix.go:172 +0x35
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*UnixListener).accept(0xc000e2ee70?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/unixsock_posix.go:166 +0x1c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*UnixListener).Accept(0xc0009e0930)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/unixsock.go:260 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/endpoints/listeners.(*StarttlsListener).Accept(0xc0009e0990)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/endpoints/listeners/starttls.go:36 +0x64
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Server).Serve(0xc0001a0540, {0x1f6c6b8, 0xc0009e0990})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/server.go:3039 +0x385
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/endpoints.(*Endpoints).serve.func1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/endpoints/endpoints.go:440 +0x25
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: gopkg.in/tomb%2ev2.(*Tomb).run(0xc0001159f0, 0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163 +0x36
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by gopkg.in/tomb%2ev2.(*Tomb).Go
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:159 +0xee
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 132 [IO wait, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44ee00, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc0009ee080?, 0xc000054500?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Accept(0xc0009ee080)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:614 +0x22c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).accept(0xc0009ee080)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_unix.go:172 +0x35
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*TCPListener).accept(0xc0003a1830)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/tcpsock_posix.go:139 +0x28
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*TCPListener).Accept(0xc0003a1830)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/tcpsock.go:288 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/endpoints/listeners.(*FancyTLSListener).Accept(0xc000115860)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/endpoints/listeners/fancytls.go:37 +0x5e
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Server).Serve(0xc0001a0540, {0x1f6c688, 0xc000115860})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/server.go:3039 +0x385
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/endpoints.(*Endpoints).serve.func1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/endpoints/endpoints.go:440 +0x25
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: gopkg.in/tomb%2ev2.(*Tomb).run(0xc0001159f0, 0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163 +0x36
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by gopkg.in/tomb%2ev2.(*Tomb).Go
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:159 +0xee
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 133 [select, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: database/sql.(*DB).connectionOpener(0xc0009a5930, {0x1f6ec28, 0xc000412700})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/database/sql/sql.go:1226 +0x8d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by database/sql.OpenDB
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/database/sql/sql.go:794 +0x18d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1713 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2313 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc001418e28, {0x1f6ec28, 0xc001628f00})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0xc0009e1b30?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 162 [select, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/cluster.dqliteProxy({0x1bed681, 0x6}, 0xc000117200, {0x1f769c0, 0xc000ab4000}, {0x1f79300, 0xc0000103b0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/cluster/gateway.go:1184 +0x6d7
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/cluster.runDqliteProxy
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/cluster/gateway.go:1146 +0x12c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 163 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44e950, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc000a94180?, 0xc0003fa000?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc000a94180, {0xc0003fa000, 0x675, 0x675})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc000a94180, {0xc0003fa000?, 0xc0003f01c0?, 0xc0003fa005?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc00080e028, {0xc0003fa000?, 0x43c4ac?, 0x1d1d4a8?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*atLeastReader).Read(0xc0003fe768, {0xc0003fa000?, 0x0?, 0xc000802d00?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:785 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bytes.(*Buffer).ReadFrom(0xc000ab4278, {0x1f5dba0, 0xc0003fe768})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bytes/buffer.go:204 +0x98
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readFromUntil(0xc000ab4000, {0x1f646e0?, 0xc00080e028}, 0x675?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:807 +0xe5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readRecordOrCCS(0xc000ab4000, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:614 +0x116
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readRecord(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:582
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).Read(0xc000ab4000, {0xc000b14000, 0x8000, 0x8000?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:1285 +0x16f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: io.copyBuffer({0x1f64740, 0xc0000103b0}, {0x1f5db60, 0xc000ab4000}, {0x0, 0x0, 0x0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/io/io.go:426 +0x1b2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: io.Copy(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/io/io.go:385
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/cluster.dqliteProxy.func1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/cluster/gateway.go:1173 +0x8a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/cluster.dqliteProxy
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/cluster/gateway.go:1172 +0x58a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 164 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44e860, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc000b9a300?, 0xc000bc4000?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc000b9a300, {0xc000bc4000, 0x8000, 0x8000})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc000b9a300, {0xc000bc4000?, 0xc000ab41e8?, 0x63fe20?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc0000103b0, {0xc000bc4000?, 0x18?, 0x8000?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: io.copyBuffer({0x1f5db80, 0xc000ab4000}, {0x1f64720, 0xc0000103b0}, {0x0, 0x0, 0x0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/io/io.go:426 +0x1b2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: io.Copy(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/io/io.go:385
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/cluster.dqliteProxy.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/cluster/gateway.go:1178 +0x8a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/cluster.dqliteProxy
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/cluster/gateway.go:1177 +0x62a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1126 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 3304 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*http2serverConn).serve(0xc0003eec00)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/h2_bundle.go:4583 +0x88c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*http2Server).ServeConn(0xc000a02000, {0x1f769c0?, 0xc000443500}, 0xc000fcfb20)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/h2_bundle.go:4185 +0x991
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.http2ConfigureServer.func1(0xc0001a0540, 0x1f769c0?, {0x1f66660, 0xc00111b340})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/h2_bundle.go:4008 +0xdd
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*conn).serve(0xc000520aa0, {0x1f6ecd0, 0xc000c33e90})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/server.go:1874 +0x1293
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by net/http.(*Server).Serve
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/server.go:3071 +0x4db
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1112 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44e590, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc0000fcf80?, 0xc0010ae2af?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc0000fcf80, {0xc0010ae2af, 0xd51, 0xd51})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc0000fcf80, {0xc0010ae2af?, 0x443fe0?, 0xc001025380?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc0003983d0, {0xc0010ae2af?, 0xc0000cce08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc0000ccf08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc001523400, {0x1f64720?, 0xc0003983d0?}, 0xc001659140, 0xc0016591a0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1113 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1189 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 3142 [syscall]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x8, 0xc00159ec00, 0x7f3900022a50, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x902b60?, 0x200000003?, 0xc000902b60?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x8, 0x3, 0x2, 0xc001aebec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc001800e60, 0xb0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2255 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.(*ProtocolLXD).getEvents.func1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd_events.go:78 +0xc5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/client.(*ProtocolLXD).getEvents
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd_events.go:76 +0x453
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1417 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1125 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44e770, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc000b9a180?, 0xc0014fe59b?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc000b9a180, {0xc0014fe59b, 0xa65, 0xa65})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc000b9a180, {0xc0014fe59b?, 0x443fe0?, 0xc001025380?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc00080e3c0, {0xc0014fe59b?, 0xc0000cde08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc0000cdf08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc000a8b860, {0x1f64720?, 0xc00080e3c0?}, 0xc00004bf80, 0xc00069a000)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 676 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 382 [syscall, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: syscall.Syscall(0x0, 0x21, 0xc000af2000, 0x1000)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/syscall/asm_linux_amd64.s:20 +0x5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: golang.org/x/sys/unix.read(0x0?, {0xc000af2000?, 0x0?, 0x0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/golang.org/x/sys@v0.1.0/unix/zsyscall_linux.go:1366 +0x4d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: golang.org/x/sys/unix.Read(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/golang.org/x/sys@v0.1.0/unix/syscall_unix.go:151
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.deviceNetlinkListener.func1(0xc000ac1da0?, 0xc000a80a20?, 0xc000a80a80?, 0xc000a80ae0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/devices.go:100 +0xa6
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by main.deviceNetlinkListener
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/devices.go:97 +0x1ca
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1954 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cc00c588, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc0001a3980?, 0xc000c472af?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc0001a3980, {0xc000c472af, 0xd51, 0xd51})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc0001a3980, {0xc000c472af?, 0x443fe0?, 0xc001025380?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc000408820, {0xc000c472af?, 0xc0008aae08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc0008aaf08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc001133090, {0x1f64720?, 0xc000408820?}, 0xc00004b140, 0xc00004b2c0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 670 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44e2c0, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc001010c80?, 0xc00129c40d?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc001010c80, {0xc00129c40d, 0xbf3, 0xbf3})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc001010c80, {0xc00129c40d?, 0x443fe0?, 0xc0014a21a0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc001014070, {0xc00129c40d?, 0xc0012d1e08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc0012d1f08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc001264320, {0x1f64720?, 0xc001014070?}, 0xc0003de660, 0xc0003de6c0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 616 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1384621 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Transport).getConn(0xc000590000, 0xc00132a480, {{}, 0x0, {0xc001328930, 0x5}, {0xc00242ba40, 0x11}, 0x0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1375 +0x5c6
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Transport).roundTrip(0xc000590000, 0xc0002a6500)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:581 +0x76f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Transport).RoundTrip(0x0?, 0x1f647e0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/roundtrip.go:17 +0x19
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.send(0xc0002a6500, {0x1f647e0, 0xc000590000}, {0x1ba7e40?, 0x422b01?, 0x0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/client.go:252 +0x5d8
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Client).send(0xc0000eec60, 0xc0002a6500, {0x203000?, 0x20?, 0x0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/client.go:176 +0x9b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Client).do(0xc0000eec60, 0xc0002a6500)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/client.go:725 +0x8f5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Client).Do(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/client.go:593
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.(*ProtocolLXD).DoHTTP(0xc0007ca300, 0xc000128000?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd.go:155 +0x5d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.(*ProtocolLXD).rawQuery(0xc0007ca300, {0x1be6c71, 0x3}, {0xc001328930, 0x19}, {0x0, 0x0}, {0x0, 0x0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd.go:293 +0x810
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.(*ProtocolLXD).query(0xc0007ca300, {0x1be6c71, 0x3}, {0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd.go:345 +0x145
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.(*ProtocolLXD).queryStruct(0xc000e12b70?, {0x1be6c71, 0x3}, {0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}, {0x1935f20, ...})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd.go:349 +0xe5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.(*ProtocolLXD).GetServer(0xc0007ca300)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd_server.go:21 +0x6a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.ConnectLXDHTTPWithContext({0x1f6ec60, 0xc000128000}, 0x0, 0xc0000eec60)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/connection.go:130 +0x24e
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.ConnectLXDHTTP(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/connection.go:94
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*qemu).getAgentMetrics(0xc0013ca160)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_qemu.go:6376 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*qemu).Metrics(0xc0013ca160)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_qemu.go:6364 +0x5f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.metricsGet.func2({0x1f8ae70, 0xc0013ca160})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:189 +0xa2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by main.metricsGet
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:186 +0xe65
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2388 [select, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/events.(*listenerCommon).Wait(0xc0010506e0, {0x1f6ecd0?, 0xc000ebef60?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/events/common.go:52 +0x8a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.eventsSocket(0xc000400600, 0xc0002a7600, {0x1f6dbb8, 0xc001040460})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/events.go:148 +0x905
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.(*eventsServe).Render(0x1d1c6b0?, {0x1f6dbb8?, 0xc001040460?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/events.go:36 +0x36
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.(*Daemon).createCmd.func1({0x1f6dbb8, 0xc001040460}, 0xc0002a7600)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/daemon.go:716 +0x17c2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.HandlerFunc.ServeHTTP(0xc0002a7300?, {0x1f6dbb8?, 0xc001040460?}, 0x1be6856?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/server.go:2084 +0x2f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/gorilla/mux.(*Router).ServeHTTP(0xc000628540, {0x1f6dbb8, 0xc001040460}, 0xc0002a7200)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:210 +0x1cf
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.(*lxdHttpServer).ServeHTTP(0xc00098b790, {0x1f6dbb8, 0xc001040460}, 0xc0002a7200)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api.go:302 +0xdc
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.serverHandler.ServeHTTP({0xc000ebe6c0?}, {0x1f6dbb8, 0xc001040460}, 0xc0002a7200)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/server.go:2916 +0x43b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*conn).serve(0xc000520280, {0x1f6ecd0, 0xc000dfe090})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/server.go:1966 +0x5d7
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by net/http.(*Server).Serve
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/server.go:3071 +0x4db
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2158 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44e0e0, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc000f0d700?, 0xc000ae635b?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc000f0d700, {0xc000ae635b, 0xca5, 0xca5})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc000f0d700, {0xc000ae635b?, 0x443fe0?, 0xc001025380?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc000408e58, {0xc000ae635b?, 0xc00173ae08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc00173af08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc00133d360, {0x1f64720?, 0xc000408e58?}, 0xc0003df620, 0xc0003df680)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 615 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44efe0, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc000a94880?, 0xc0004ae4f9?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc000a94880, {0xc0004ae4f9, 0xb07, 0xb07})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc000a94880, {0xc0004ae4f9?, 0x443fe0?, 0xc001025380?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc000010428, {0xc0004ae4f9?, 0xc000694e08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc000694f08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc000e92230, {0x1f64720?, 0xc000010428?}, 0xc0005feea0, 0xc0005fefc0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1188 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44e680, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc0009a1680?, 0xc0010af2af?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc0009a1680, {0xc0010af2af, 0xd51, 0xd51})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc0009a1680, {0xc0010af2af?, 0x443fe0?, 0xc001025380?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc00080e288, {0xc0010af2af?, 0xc000696e08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc000696f08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc000a8b0e0, {0x1f64720?, 0xc00080e288?}, 0xc000b5b320, 0xc000b5b380)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 151730 [syscall, 121 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x50, 0xc001773c00, 0x7f39940073e0, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x440680?, 0x200000003?, 0xc000440680?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x50, 0x3, 0x2, 0xc000ef3ec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc001801cc0, 0x7?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1990 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cc00c678, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc0012c2700?, 0xc00037a2af?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc0012c2700, {0xc00037a2af, 0xd51, 0xd51})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc0012c2700, {0xc00037a2af?, 0x443fe0?, 0xc001025380?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc000408978, {0xc00037a2af?, 0xc000742e08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc000742f08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc0011338b0, {0x1f64720?, 0xc000408978?}, 0xc001711080, 0xc0017110e0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1991 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1712 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cc00c768, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc0002e0a80?, 0xc000843010?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc0002e0a80, {0xc000843010, 0xff0, 0xff0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc0002e0a80, {0xc000843010?, 0x443fe0?, 0xc001025380?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc0001410a8, {0xc000843010?, 0xc000692e08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc000692f08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc0015223c0, {0x1f64720?, 0xc0001410a8?}, 0xc0001172c0, 0xc000117320)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 456 [select, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.deviceEventListener(0xc0006e00b0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/devices.go:538 +0x212
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by main.(*Daemon).init
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/daemon.go:1457 +0x3f75
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 457 [chan receive, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/fsmonitor/drivers.(*fanotify).load.func1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/fsmonitor/drivers/driver_fanotify.go:62 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/fsmonitor/drivers.(*fanotify).load
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/fsmonitor/drivers/driver_fanotify.go:61 +0x2c5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 458 [syscall, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: syscall.Syscall(0x0, 0x20, 0xc00038c500, 0x100)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/syscall/asm_linux_amd64.s:20 +0x5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: golang.org/x/sys/unix.read(0x0?, {0xc00038c500?, 0x0?, 0x0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/golang.org/x/sys@v0.1.0/unix/zsyscall_linux.go:1366 +0x4d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: golang.org/x/sys/unix.Read(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/golang.org/x/sys@v0.1.0/unix/syscall_unix.go:151
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/fsmonitor/drivers.(*fanotify).getEvents(0xc0006b4640, 0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/fsmonitor/drivers/driver_fanotify.go:82 +0x74
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/fsmonitor/drivers.(*fanotify).load
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/fsmonitor/drivers/driver_fanotify.go:67 +0x313
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 675 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44e4a0, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc00084a880?, 0xc00057d4b0?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc00084a880, {0xc00057d4b0, 0xb50, 0xb50})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc00084a880, {0xc00057d4b0?, 0x443fe0?, 0xc0014a21a0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc000141038, {0xc00057d4b0?, 0xc000695e08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc000695f08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc000b357c0, {0x1f64720?, 0xc000141038?}, 0xc00004b1a0, 0xc00004b200)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2231 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2115 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cc00c1c8, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc0000fd900?, 0xc0007ac28f?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc0000fd900, {0xc0007ac28f, 0xd71, 0xd71})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc0000fd900, {0xc0007ac28f?, 0x443fe0?, 0xc001025380?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc000140080, {0xc0007ac28f?, 0xc000743e08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc000743f08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc000d8d2c0, {0x1f64720?, 0xc000140080?}, 0xc0003de2a0, 0xc0003de300)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1000 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 671 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1416 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44e1d0, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc000f0c400?, 0xc000eec266?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc000f0c400, {0xc000eec266, 0xd9a, 0xd9a})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc000f0c400, {0xc000eec266?, 0x443fe0?, 0xc0022ffd40?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc000398748, {0xc000eec266?, 0xc0003eae08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc0003eaf08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc000a1b450, {0x1f64720?, 0xc000398748?}, 0xc000a79140, 0xc000a791a0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2116 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1528 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cc00ca38, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc000e0f580?, 0xc00054b266?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc000e0f580, {0xc00054b266, 0xd9a, 0xd9a})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc000e0f580, {0xc00054b266?, 0x443fe0?, 0xc0022ffd40?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc0000106d0, {0xc00054b266?, 0xc0008a9e08?, 0x41136d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Scanner).Scan(0xc0008a9f08)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/scan.go:215 +0x865
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).listen(0xc000d8d9a0, {0x1f64720?, 0xc0000106d0?}, 0xc0003df0e0, 0xc0003df140)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:175 +0x10b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/digitalocean/go-qemu/qmp.(*SocketMonitor).Connect
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/digitalocean/go-qemu@v0.0.0-20220826173844-d5f5e3ceed89/qmp/socket.go:151 +0x358
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2159 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1623 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1691 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:91 +0xf9
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/instance/drivers/qmp.(*Monitor).start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/qmp/monitor.go:85 +0xea
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2256 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cd44eb30, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc0008ba200?, 0xc000cf0000?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc0008ba200, {0xc000cf0000, 0x9c32, 0x9c32})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc0008ba200, {0xc000cf0000?, 0x0?, 0x7f39f5a93948?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc00152c010, {0xc000cf0000?, 0xc0006bc798?, 0x4703d9?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*atLeastReader).Read(0xc002178a80, {0xc000cf0000?, 0x0?, 0x0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:785 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bytes.(*Buffer).ReadFrom(0xc0009c0278, {0x1f5dba0, 0xc002178a80})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bytes/buffer.go:204 +0x98
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readFromUntil(0xc0009c0000, {0x1f646e0?, 0xc00152c010}, 0x741411?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:807 +0xe5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readRecordOrCCS(0xc0009c0000, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:614 +0x116
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readRecord(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:582
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).Read(0xc0009c0000, {0xc0007ad000, 0x1000, 0x7f39cc06bd48?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:1285 +0x16f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Reader).fill(0xc0008cd500)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/bufio.go:106 +0x103
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Reader).Peek(0xc0008cd500, 0x2)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/bufio.go:144 +0x5d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/gorilla/websocket.(*Conn).read(0xc000814160, 0xc000c681d8?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/gorilla/websocket@v1.5.0/conn.go:371 +0x2c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/gorilla/websocket.(*Conn).advanceFrame(0xc000814160)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/gorilla/websocket@v1.5.0/conn.go:809 +0x7b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/gorilla/websocket.(*Conn).NextReader(0xc000814160)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/gorilla/websocket@v1.5.0/conn.go:1009 +0xc5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/gorilla/websocket.(*Conn).ReadMessage(0x19608a0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/gorilla/websocket@v1.5.0/conn.go:1093 +0x19
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.(*ProtocolLXD).getEvents.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd_events.go:108 +0x68
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/client.(*ProtocolLXD).getEvents
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd_events.go:106 +0x4e5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2427 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/events.(*websockListenerConnection).Reader(0xc0014185b8, {0x1f6ec28?, 0xc000be6180?}, 0xc00086aa10)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/events/connections.go:126 +0x3c5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/events.(*listenerCommon).start(0xc0010506e0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/events/common.go:36 +0x1e5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/events.(*Server).AddListener
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/events/events.go:97 +0x44c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2269 [IO wait, 308 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cc00c2b8, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc000a95400?, 0x18?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Accept(0xc000a95400)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:614 +0x22c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).accept(0xc000a95400)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_unix.go:172 +0x35
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*UnixListener).accept(0x44c080?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/unixsock_posix.go:166 +0x1c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*UnixListener).Accept(0xc0009ad6b0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/unixsock.go:260 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1070 +0x55
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1068 +0x20a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2314 [select, 40 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc001418e58, {0x1f6ec28, 0xc001628f00})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0xc00010e130?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2315 [select, 40 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc001418e88, {0x1f6ec28, 0xc001628f00})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0xc000ea6280?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2402 [select, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc0007ba030, {0x1f6ec28, 0xc0007be040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2403 [select, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc0007ba060, {0x1f6ec28, 0xc0007be040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2404 [select, 40 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc0007ba090, {0x1f6ec28, 0xc0007be040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2405 [select, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc0007ba0c0, {0x1f6ec28, 0xc0007be040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2406 [select, 40 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc0007ba0f0, {0x1f6ec28, 0xc0007be040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2407 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc0007ba120, {0x1f6ec28, 0xc0007be040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2408 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc0007ba150, {0x1f6ec28, 0xc0007be040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2409 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc0007ba180, {0x1f6ec28, 0xc0007be040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2410 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc0007ba1b0, {0x1f6ec28, 0xc0007be040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2411 [select, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc0007ba1e0, {0x1f6ec28, 0xc0007be040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2412 [select, 640 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc0007ba210, {0x1f6ec28, 0xc0007be040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2413 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc0007ba270, {0x1f6ec28, 0xc0007be040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/task.go:68 +0x15f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/task/group.go:58 +0x2f3
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 2429 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cc00c498, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc00039a500?, 0xc00105a700?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc00039a500, {0xc00105a700, 0x675, 0x675})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc00039a500, {0xc00105a700?, 0xc001380220?, 0xc00105a705?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc000140020, {0xc00105a700?, 0xc0000138c0?, 0x74753a?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*atLeastReader).Read(0xc00272ade0, {0xc00105a700?, 0x0?, 0x0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:785 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bytes.(*Buffer).ReadFrom(0xc0004a6cf8, {0x1f5dba0, 0xc00272ade0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bytes/buffer.go:204 +0x98
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readFromUntil(0xc0004a6a80, {0x1f646e0?, 0xc000140020}, 0x675?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:807 +0xe5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readRecordOrCCS(0xc0004a6a80, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:614 +0x116
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readRecord(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:582
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).Read(0xc0004a6a80, {0xc0014ff000, 0x1000, 0x626cdb3949d7b?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:1285 +0x16f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Reader).fill(0xc000864f00)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/bufio.go:106 +0x103
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bufio.(*Reader).Peek(0xc000864f00, 0x2)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bufio/bufio.go:144 +0x5d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/gorilla/websocket.(*Conn).read(0xc0001754a0, 0xc0014ff006?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/gorilla/websocket@v1.5.0/conn.go:371 +0x2c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/gorilla/websocket.(*Conn).advanceFrame(0xc0001754a0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/gorilla/websocket@v1.5.0/conn.go:809 +0x7b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/gorilla/websocket.(*Conn).NextReader(0xc0001754a0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/gorilla/websocket@v1.5.0/conn.go:1009 +0xc5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/gorilla/websocket.(*Conn).ReadJSON(0xc00090ea48?, {0x19359e0, 0xc000c0b920})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/gorilla/websocket@v1.5.0/json.go:50 +0x27
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/events.(*websockListenerConnection).Reader.func3()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/events/connections.go:88 +0xe7
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/events.(*websockListenerConnection).Reader
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/events/connections.go:82 +0x21b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 3176 [syscall]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x23, 0xc000f6fc00, 0x7f3980019210, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x833520?, 0x200000003?, 0xc000833520?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x23, 0x3, 0x2, 0xc0010c7ec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc00003ceb0, 0xc000c8d620?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 3307 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39cc00bb38, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc000419b00?, 0xc00060ca00?, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc000419b00, {0xc00060ca00, 0x60c, 0x60c})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*netFD).Read(0xc000419b00, {0xc00060ca00?, 0xc00111b320?, 0xc00060ca05?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/fd_posix.go:55 +0x29
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net.(*conn).Read(0xc000398348, {0xc00060ca00?, 0x60c?, 0x60c?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/net.go:183 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*atLeastReader).Read(0xc002178678, {0xc00060ca00?, 0x0?, 0x7f39cc424640?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:785 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bytes.(*Buffer).ReadFrom(0xc000443778, {0x1f5dba0, 0xc002178678})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bytes/buffer.go:204 +0x98
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readFromUntil(0xc000443500, {0x1f646e0?, 0xc000398348}, 0x60c?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:807 +0xe5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readRecordOrCCS(0xc000443500, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:614 +0x116
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readRecord(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:582
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).Read(0xc000443500, {0xc000aaa660, 0x9, 0xc001002304?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:1285 +0x16f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: io.ReadAtLeast({0x1f5db60, 0xc000443500}, {0xc000aaa660, 0x9, 0x9}, 0x9)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/io/io.go:331 +0x9a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: io.ReadFull(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/io/io.go:350
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.http2readFrameHeader({0xc000aaa660?, 0x9?, 0xc000f4e8d0?}, {0x1f5db60?, 0xc000443500?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/h2_bundle.go:1566 +0x6e
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*http2Framer).ReadFrame(0xc000aaa620)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/h2_bundle.go:1830 +0x95
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*http2serverConn).readFrames(0xc0003eec00)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/h2_bundle.go:4469 +0x91
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by net/http.(*http2serverConn).serve
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/h2_bundle.go:4575 +0x535
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 215479 [syscall, 189 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x5b, 0xc000d04800, 0x7f395002dbd0, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x833a00?, 0x200000003?, 0xc000833a00?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x5b, 0x3, 0x2, 0xc001597ec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc001ade870, 0x90?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1384625 [chan receive]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*persistConn).addTLS(0xc001094d80, {0x1f6ec60?, 0xc000128000}, {0xc00114bec0, 0xd}, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1543 +0x365
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Transport).dialConn(0xc000437040, {0x1f6ec60, 0xc000128000}, {{}, 0x0, {0xc001634270, 0x5}, {0xc00114bec0, 0x11}, 0x0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1617 +0x9e5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Transport).dialConnFor(0x1f8ae70?, 0xc0008682c0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1449 +0xb0
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by net/http.(*Transport).queueForDial
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1418 +0x3d2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1384642 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39b459e480, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc0010050e0?, 0xc001646000?, 0x1)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc0010050e0, {0xc001646000, 0x205, 0x205})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: os.(*File).read(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/os/file_posix.go:31
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: os.(*File).Read(0xc00080e048, {0xc001646000?, 0xc001646000?, 0x0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/os/file.go:119 +0x5e
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/mdlayher/socket.(*Conn).Read(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/mdlayher/socket@v0.2.3/conn.go:82
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/mdlayher/vsock.(*Conn).Read(0xc000bbe318, {0xc001646000?, 0xc00153b5f8?, 0x64f625?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/mdlayher/vsock@v1.1.1/vsock.go:230 +0x31
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*atLeastReader).Read(0xc000bbe330, {0xc001646000?, 0x0?, 0xc00153b630?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:785 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bytes.(*Buffer).ReadFrom(0xc0002b5af8, {0x1f5dba0, 0xc000bbe330})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bytes/buffer.go:204 +0x98
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readFromUntil(0xc0002b5880, {0x7f39cc388018?, 0xc000bbe318}, 0x64e4e6?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:807 +0xe5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readRecordOrCCS(0xc0002b5880, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:614 +0x116
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readRecord(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:582
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readHandshake(0xc0002b5880)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:1017 +0x6d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).clientHandshake(0xc0002b5880, {0x1f6ec28, 0xc000f20040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/handshake_client.go:179 +0x249
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).handshakeContext(0xc0002b5880, {0x1f6ec60, 0xc000128000})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:1460 +0x32f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).HandshakeContext(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:1403
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*persistConn).addTLS.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1537 +0x71
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by net/http.(*persistConn).addTLS
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1533 +0x345
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 264225 [syscall, 382 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x61, 0xc000c6b000, 0x7f39c401ed00, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x2208b60?, 0x200000003?, 0xc002208b60?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x61, 0x3, 0x2, 0xc00072dec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc000faa640, 0x280?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 219674 [syscall, 157 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x5d, 0xc001ac4800, 0x7f38b0002490, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x14a24e0?, 0x200000003?, 0xc0014a24e0?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x5d, 0x3, 0x2, 0xc001acbec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc000e93270, 0xc000dff9b0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 56142 [syscall, 206 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x3e, 0xc000c6a800, 0x7f38dc008720, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x682d00?, 0x200000003?, 0xc000682d00?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x3e, 0x3, 0x2, 0xc000669ec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc00154a7d0, 0xc000bf5bc0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 149010 [syscall, 189 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x4c, 0xc000be3c00, 0x7f38cc035490, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0xdc3040?, 0x200000003?, 0xc000dc3040?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x4c, 0x3, 0x2, 0xc000e3dec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc000b7f090, 0xc00167eab0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 200452 [syscall, 173 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x57, 0xc000976c00, 0x7f38b800a540, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x1b1e000?, 0x200000003?, 0xc001b1e000?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x57, 0x3, 0x2, 0xc001acfec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc001264ff0, 0xc00189c390?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 4958 [syscall]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x2b, 0xc000f6e000, 0x7f38d8015d60, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x139d520?, 0x200000003?, 0xc00139d520?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x2b, 0x3, 0x2, 0xc001af1ec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc000fda050, 0xc000e12870?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 5277 [syscall, 4 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x2e, 0xc000872400, 0x7f3850001f00, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x164e000?, 0x200000003?, 0xc00164e000?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x2e, 0x3, 0x2, 0xc00066bec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc0018000f0, 0xc000a016b0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 72722 [syscall, 244 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x41, 0xc0028ab000, 0x7f39a402c830, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x2209d40?, 0x200000003?, 0xc002209d40?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x41, 0x3, 0x2, 0xc00157bec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc000115360, 0xc000a0cc00?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 20669 [syscall]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x35, 0xc000be2000, 0x7f38e4006580, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x1374000?, 0x200000003?, 0xc001374000?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x35, 0x3, 0x2, 0xc00066fec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc00087d220, 0xc001b81d40?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 36456 [syscall, 214 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x39, 0xc000f6ec00, 0x7f3858035fd0, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x13756c0?, 0x200000003?, 0xc0013756c0?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x39, 0x3, 0x2, 0xc00072bec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc00156c140, 0xc0012c3400?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 25042 [syscall]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x37, 0xc000f07c00, 0x7f39440579a0, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x1375ba0?, 0x200000003?, 0xc001375ba0?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x37, 0x3, 0x2, 0xc001acdec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc000a0aa00, 0x90?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 192553 [syscall, 183 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x55, 0xc000976800, 0x7f387000ad70, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x1083a00?, 0x200000003?, 0xc001083a00?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x55, 0x3, 0x2, 0xc000e3fec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc00003d400, 0xc0006ef780?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 317868 [syscall, 108 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x69, 0xc001ac4400, 0x7f3908000ff0, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x2067860?, 0x200000003?, 0xc002067860?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x69, 0x3, 0x2, 0xc0006e7ec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc000faac80, 0xc001f99bc0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 46053 [syscall, 327 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x3b, 0xc000c6a400, 0x7f38ec006130, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x2543380?, 0x200000003?, 0xc002543380?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x3b, 0x3, 0x2, 0xc000d6bec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc000b34870, 0xc000eb96b0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1384643 [runnable]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/go-lxc._Cfunc_go_lxc_get_cgroup_item(0x7f388802fdc0, 0x7f3868003800)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:705 +0x4d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/go-lxc.(*Container).cgroupItem.func2(0x7f3868003800?, 0x13?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/lxc/go-lxc@v0.0.0-20220627182551-ad3d9f7cb822/container.go:977 +0x4d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/go-lxc.(*Container).cgroupItem(0xc000f743c0, {0x1c1acec?, 0x0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/lxc/go-lxc@v0.0.0-20220627182551-ad3d9f7cb822/container.go:977 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/go-lxc.(*Container).CgroupItem(0x0?, {0x1c1acec?, 0x4?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/lxc/go-lxc@v0.0.0-20220627182551-ad3d9f7cb822/container.go:1006 +0xa6
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*lxcCgroupReadWriter).Get(0x1962520?, 0xc0001e3e90?, {0x1bee095?, 0x6?}, {0x1c1acec?, 0xc00140e400?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:6796 +0x112
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/cgroup.(*CGroup).GetMemorySwapUsage(0xc000bbe348)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/cgroup/abstraction.go:609 +0xa2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*lxc).Metrics(0xc001a34d80)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:6930 +0xd1c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.metricsGet.func2({0x1f8ac28, 0xc001a34d80})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:189 +0xa2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by main.metricsGet
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:186 +0xe65
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1384543 [runnable]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/go-lxc._Cfunc_go_lxc_get_cgroup_item(0x7f388801a980, 0x7f3900022af0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:705 +0x4d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/go-lxc.(*Container).cgroupItem.func2(0x7f3900022af0?, 0xa?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/lxc/go-lxc@v0.0.0-20220627182551-ad3d9f7cb822/container.go:977 +0x4d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/go-lxc.(*Container).cgroupItem(0xc000c33e60, {0x1bfe58d?, 0xc001002520?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/lxc/go-lxc@v0.0.0-20220627182551-ad3d9f7cb822/container.go:977 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/go-lxc.(*Container).CgroupItem(0x20000003a?, {0x1bfe58d?, 0xc00005d900?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/lxc/go-lxc@v0.0.0-20220627182551-ad3d9f7cb822/container.go:1006 +0xa6
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*lxcCgroupReadWriter).Get(0x1962520?, 0xc0001e3e90?, {0x1bee095?, 0x6?}, {0x1bfe58d?, 0xc001a6e000?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:6796 +0x112
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/cgroup.(*CGroup).GetMemoryLimit(0xc000fa02b8)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/cgroup/abstraction.go:112 +0xa2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/cgroup.(*CGroup).GetEffectiveMemoryLimit(0xc001a34f00?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/cgroup/abstraction.go:137 +0x8d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*lxc).Metrics(0xc001a34f00)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:6859 +0x25b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.metricsGet.func2({0x1f8ac28, 0xc001a34f00})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:189 +0xa2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by main.metricsGet
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:186 +0xe65
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 294236 [syscall, 125 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x63, 0xc0008ca800, 0x7f397001f3b0, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0xd52b60?, 0x200000003?, 0xc000d52b60?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x63, 0x3, 0x2, 0xc00093dec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc001f19450, 0x200?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 213607 [syscall, 153 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x5a, 0xc002387000, 0x7f38bc00a7e0, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x1b46ea0?, 0x200000003?, 0xc001b46ea0?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x5a, 0x3, 0x2, 0xc000d6fec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc00133d220, 0x1c0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1384575 [runnable]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: syscall.Syscall(0x3, 0x76, 0x0, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/syscall/asm_linux_amd64.s:20 +0x5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: syscall.Close(0x7f39b4051a00?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/syscall/zsyscall_linux_amd64.go:295 +0x30
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).destroy(0xc001018000)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:84 +0x51
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).decref(0x7f39b4051a00?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_mutex.go:213 +0x53
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Close(0xc001018000)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:107 +0x4f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: os.(*file).close(0xc001018000)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/os/file_unix.go:252 +0xad
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: os.(*File).Close(0xc00010e130?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/os/file_posix.go:25 +0x25
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: os.ReadFile({0x1c113c9?, 0x7f39cf038600?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/os/file.go:705 +0x2b7
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/cgroup.(*CGroup).GetIOStats(0xc000b5e768)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/cgroup/abstraction.go:927 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*lxc).Metrics(0xc001a34c00)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:6960 +0x1550
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.metricsGet.func2({0x1f8ac28, 0xc001a34c00})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:189 +0xa2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by main.metricsGet
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:186 +0xe65
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1384283 [select]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Transport).getConn(0xc000437040, 0xc0012a1000, {{}, 0x0, {0xc001634270, 0x5}, {0xc00114bec0, 0x11}, 0x0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1375 +0x5c6
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Transport).roundTrip(0xc000437040, 0xc001260f00)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:581 +0x76f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Transport).RoundTrip(0x0?, 0x1f647e0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/roundtrip.go:17 +0x19
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.send(0xc001260f00, {0x1f647e0, 0xc000437040}, {0x1ba7e40?, 0x51bf01?, 0x0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/client.go:252 +0x5d8
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Client).send(0xc001f7aa20, 0xc001260f00, {0x203000?, 0x203000?, 0x0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/client.go:176 +0x9b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Client).do(0xc001f7aa20, 0xc001260f00)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/client.go:725 +0x8f5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Client).Do(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/client.go:593
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.(*ProtocolLXD).DoHTTP(0xc001a35200, 0xc000128000?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd.go:155 +0x5d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.(*ProtocolLXD).rawQuery(0xc001a35200, {0x1be6c71, 0x3}, {0xc001634270, 0x19}, {0x0, 0x0}, {0x0, 0x0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd.go:293 +0x810
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.(*ProtocolLXD).query(0xc001a35200, {0x1be6c71, 0x3}, {0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd.go:345 +0x145
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.(*ProtocolLXD).queryStruct(0xc00127f290?, {0x1be6c71, 0x3}, {0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}, {0x1935f20, ...})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd.go:349 +0xe5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.(*ProtocolLXD).GetServer(0xc001a35200)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/lxd_server.go:21 +0x6a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.ConnectLXDHTTPWithContext({0x1f6ec60, 0xc000128000}, 0x0, 0xc001f7aa20)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/connection.go:130 +0x24e
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/client.ConnectLXDHTTP(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/client/connection.go:94
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*qemu).getAgentMetrics(0xc0013ca6e0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_qemu.go:6376 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*qemu).Metrics(0xc0013ca6e0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_qemu.go:6364 +0x5f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.metricsGet.func2({0x1f8ae70, 0xc0013ca6e0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:189 +0xa2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by main.metricsGet
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:186 +0xe65
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 505873 [syscall, 286 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x68, 0xc00235c000, 0x7f3998000f00, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x25004e0?, 0x200000003?, 0xc0025004e0?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x68, 0x3, 0x2, 0xc0011ddec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc00156c370, 0xc000ac2090?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 101385 [syscall, 250 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x49, 0xc000f06c00, 0x7f391803ac70, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0xfc2820?, 0x200000003?, 0xc000fc2820?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x49, 0x3, 0x2, 0xc0011dbec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc000d8c640, 0xc001da4810?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1384607 [chan receive]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*persistConn).addTLS(0xc00136a480, {0x1f6ec60?, 0xc000128000}, {0xc00242ba40, 0xd}, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1543 +0x365
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Transport).dialConn(0xc000590000, {0x1f6ec60, 0xc000128000}, {{}, 0x0, {0xc001328930, 0x5}, {0xc00242ba40, 0x11}, 0x0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1617 +0x9e5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*Transport).dialConnFor(0x1f8ae70?, 0xc0005302c0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1449 +0xb0
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by net/http.(*Transport).queueForDial
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1418 +0x3d2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 572672 [syscall, 14 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x6e, 0xc0028ab400, 0x7f385c0010d0, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0xddc000?, 0x200000003?, 0xc000ddc000?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x6e, 0x3, 0x2, 0xc001577ec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc0010a7950, 0x7?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 100047 [syscall, 19 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x47, 0xc000873400, 0x7f38640146e0, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x1b1f380?, 0x200000003?, 0xc001b1f380?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x47, 0x3, 0x2, 0xc00066dec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc002848910, 0x7?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 716406 [syscall, 269 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x70, 0xc001f36000, 0x7f38a0006bb0, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x194d6c0?, 0x200000003?, 0xc00194d6c0?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x70, 0x3, 0x2, 0xc00093fec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc001ce03c0, 0xc001bf7f50?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1384558 [runnable]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/canonical/go-dqlite/internal/protocol.EncodeExecSQL(0xc0001ba7a0?, 0x0?, {0x1be9999?, 0x5?}, {0x0?, 0x0?, 0x0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/canonical/go-dqlite@v1.11.5/internal/protocol/request.go:81 +0xf2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/canonical/go-dqlite/driver.(*Conn).ExecContext(0xc0001ba790, {0x1f6ec98, 0xc001d345a0}, {0x1be9999, 0x5}, {0x0?, 0x1?, 0xc0001ba790?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/canonical/go-dqlite@v1.11.5/driver/driver.go:382 +0xa5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/canonical/go-dqlite/driver.(*Conn).BeginTx(0xc0001ba790, {0x1f6ec98?, 0xc001d345a0?}, {0xc000ec8720?, 0x7d?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/canonical/go-dqlite@v1.11.5/driver/driver.go:464 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: database/sql.ctxDriverBegin({0x1f6ec98, 0xc001d345a0}, 0x0, {0x1f6b938, 0xc0001ba790})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/database/sql/ctxutil.go:104 +0x7b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: database/sql.(*DB).beginDC.func1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/database/sql/sql.go:1884 +0xc5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: database/sql.withLock({0x1f671f0, 0xc000ba6240}, 0xc000ec8830)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/database/sql/sql.go:3437 +0x8c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: database/sql.(*DB).beginDC(0xc0009a5930, {0x1f6ec98, 0xc001d345a0}, 0xc000ba6240, 0xc001448060, 0x1f6ec60?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/database/sql/sql.go:1880 +0xcf
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: database/sql.(*DB).begin(0x0?, {0x1f6ec98, 0xc001d345a0}, 0xc000ec8928?, 0xca?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/database/sql/sql.go:1873 +0x94
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: database/sql.(*DB).BeginTx(0x1f6ec60?, {0x1f6ec98, 0xc001d345a0}, 0x7f39cc0692c8?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/database/sql/sql.go:1847 +0x7e
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/db/query.Transaction({0x1f6ec60?, 0xc000128008?}, 0x742b25?, 0xc000ec8a58)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/db/query/transaction.go:18 +0x98
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/db.(*Cluster).transaction.func1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/db/db.go:374 +0x7b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/db/query.Retry(0xc000ec8b28)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/db/query/retry.go:28 +0xba
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/db.(*Cluster).retry(0x7f39b404ef88?, 0xc000ec8b28)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/db/db.go:392 +0x4b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/db.(*Cluster).transaction(0xc000b2cdc0, {0x1f6ec60, 0xc000128008}, 0xc000ec8e40)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/db/db.go:368 +0x98
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/db.(*Cluster).Transaction(0x1b02780?, {0x1f6ec60?, 0xc000128008?}, 0x47d177?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/db/db.go:332 +0xb0
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/db.(*Cluster).InstanceList(0xc0014260f0?, 0xc000ec9128, {0xc000ec9168, 0x1, 0xc0003a1440?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/db/instances.go:241 +0x199
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.instanceLoadNodeProjectAll(0xc0006e0580, {0xc001ec839e, 0x2}, 0xffffffffffffffff)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance.go:405 +0x18d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.metricsGet(0xc000400600, 0x418ee7?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:174 +0xc8b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.(*Daemon).createCmd.func1.3({0x1d1c980?, 0x1d1c5a8?, 0xa0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/daemon.go:695 +0xef
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.(*Daemon).createCmd.func1({0x1f6daf8, 0xc0021d81b8}, 0xc000db7d00)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/daemon.go:700 +0x1576
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.HandlerFunc.ServeHTTP(0xc000db7a00?, {0x1f6daf8?, 0xc0021d81b8?}, 0x1be6856?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/server.go:2084 +0x2f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/gorilla/mux.(*Router).ServeHTTP(0xc000628540, {0x1f6daf8, 0xc0021d81b8}, 0xc000db7900)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:210 +0x1cf
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.(*lxdHttpServer).ServeHTTP(0xc00098b790, {0x1f6daf8, 0xc0021d81b8}, 0xc000db7900)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api.go:302 +0xdc
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.serverHandler.ServeHTTP({0x418ee7?}, {0x1f6daf8, 0xc0021d81b8}, 0xc000db7900)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/server.go:2916 +0x43b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.initALPNRequest.ServeHTTP({{0x1f6ecd0?, 0xc000c33ef0?}, 0xc000443500?, {0xc0001a0540?}}, {0x1f6daf8, 0xc0021d81b8}, 0xc000db7900)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/server.go:3523 +0x245
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*http2serverConn).runHandler(0x14fb052?, 0xc0019afe60?, 0xf?, 0xc000f12350?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/h2_bundle.go:5906 +0x78
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by net/http.(*http2serverConn).processHeaders
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/h2_bundle.go:5636 +0x59b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 374811 [syscall, 88 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x66, 0xc000c6b400, 0x7f38e002f9b0, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0xe65d40?, 0x200000003?, 0xc000e65d40?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x66, 0x3, 0x2, 0xc001579ec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc002376d20, 0xc001cff650?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1384489 [runnable]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._Cfunc_get_packet_address(0x7f398c029b20, 0xc0016f8800, 0x400)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:201 +0x4d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.NetnsGetifaddrs(0x7b37)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:192 +0xadc
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*lxc).networkState(0xc001a34a80)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:5907 +0xaf
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*lxc).Metrics(0xc001a34a80)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:6983 +0x1e5b
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.metricsGet.func2({0x1f8ac28, 0xc001a34a80})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:189 +0xa2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by main.metricsGet
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:186 +0xe65
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 245240 [syscall, 127 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x5f, 0xc000be2800, 0x7f38ac0039f0, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x8c5a00?, 0x200000003?, 0xc0008c5a00?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x5f, 0x3, 0x2, 0xc001ad1ec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc000a0bb80, 0xc000a87410?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 540844 [syscall, 390 minutes]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils._C2func_lxc_abstract_unix_recv_fds_iov(0x6b, 0xc001772400, 0x7f396c005b90, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011_cgo_gotypes.go:164 +0x57
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData.func1(0x1504b60?, 0x200000003?, 0xc001504b60?, 0x4)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x69
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared/netutils.AbstractUnixReceiveFdData(0x6b, 0x3, 0x2, 0xc000e3bec8?, 0x4f1426?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/netutils/network_linux_cgo.go:263 +0x85
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.(*Iovec).ReceiveSeccompIovec(0xc00187a320, 0xc00127eab0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:959 +0x49
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1.1()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1092 +0x187
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by github.com/lxc/lxd/lxd/seccomp.NewSeccompServer.func1
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/seccomp/seccomp.go:1075 +0x45
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1384573 [runnable]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: encoding/pem.Decode({0xc002506d19, 0x2ae64, 0x2ae65})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/encoding/pem/pem.go:169 +0x685
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/x509.(*CertPool).AppendCertsFromPEM(0xc000b0bad0, {0xc002502000?, 0x136?, 0xc0001aac39?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/x509/cert_pool.go:209 +0x65
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared.systemCertPool()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/network_unix.go:21 +0x65
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared.finalizeTLSConfig(0xc0009de180, 0xc0014f8580)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/network.go:84 +0x37
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/shared.GetTLSConfigMem({0xc00149e300, 0x2f5}, {0xc0014cc120, 0x120}, {0x0, 0x0}, {0xc0001dd180, 0x316}, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/shared/network.go:173 +0x40e
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/vsock.HTTPClient(0x25, 0x20fb, {0xc00149e300, 0x2f5}, {0xc0014cc120, 0x120}, {0xc0001dd180, 0x316})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/vsock/vsock.go:35 +0xa5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*qemu).getAgentClient(0xc0013cb1e0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_qemu.go:340 +0x18e
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*qemu).getAgentMetrics(0xc0013cb1e0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_qemu.go:6371 +0x5c
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/lxc/lxd/lxd/instance/drivers.(*qemu).Metrics(0xc0013cb1e0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/instance/drivers/driver_qemu.go:6364 +0x5f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.metricsGet.func2({0x1f8ae70, 0xc0013cb1e0})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:189 +0xa2
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by main.metricsGet
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:186 +0xe65
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1383100 [IO wait]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.runtime_pollWait(0x7f39b4633f30, 0x72)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/netpoll.go:302 +0x89
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).wait(0xc00184a360?, 0xc000e5a900?, 0x1)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:83 +0x32
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*pollDesc).waitRead(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_poll_runtime.go:88
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: internal/poll.(*FD).Read(0xc00184a360, {0xc000e5a900, 0x205, 0x205})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/internal/poll/fd_unix.go:167 +0x25a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: os.(*File).read(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/os/file_posix.go:31
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: os.(*File).Read(0xc00152c2a0, {0xc000e5a900?, 0xc000e5a900?, 0x0?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/os/file.go:119 +0x5e
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/mdlayher/socket.(*Conn).Read(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/mdlayher/socket@v0.2.3/conn.go:82
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: github.com/mdlayher/vsock.(*Conn).Read(0xc002178a38, {0xc000e5a900?, 0xc0010cb5f8?, 0x7f39cf15efff?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/.go/pkg/mod/github.com/mdlayher/vsock@v1.1.1/vsock.go:230 +0x31
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*atLeastReader).Read(0xc002178a68, {0xc000e5a900?, 0x0?, 0xcb630?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:785 +0x3d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: bytes.(*Buffer).ReadFrom(0xc0012c9778, {0x1f5dba0, 0xc002178a68})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/bytes/buffer.go:204 +0x98
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readFromUntil(0xc0012c9500, {0x7f39cc388018?, 0xc002178a38}, 0x0?)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:807 +0xe5
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readRecordOrCCS(0xc0012c9500, 0x0)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:614 +0x116
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readRecord(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:582
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).readHandshake(0xc0012c9500)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:1017 +0x6d
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).clientHandshake(0xc0012c9500, {0x1f6ec28, 0xc001828040})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/handshake_client.go:179 +0x249
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).handshakeContext(0xc0012c9500, {0x1f6ec60, 0xc000128000})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:1460 +0x32f
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: crypto/tls.(*Conn).HandshakeContext(...)
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/crypto/tls/conn.go:1403
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: net/http.(*persistConn).addTLS.func2()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1537 +0x71
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: created by net/http.(*persistConn).addTLS
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/net/http/transport.go:1533 +0x345
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[601548]: => LXD failed with return code 2
Nov 17 03:10:00 us2204-iph-lxd03 systemd[1]: snap.lxd.daemon.service: Main process exited, code=exited, status=1/FAILURE
Nov 17 03:10:00 us2204-iph-lxd03 systemd[1]: snap.lxd.daemon.service: Failed with result 'exit-code'.
Nov 17 03:10:00 us2204-iph-lxd03 systemd[1]: snap.lxd.daemon.service: Scheduled restart job, restart counter is at 21.
Nov 17 03:10:00 us2204-iph-lxd03 systemd[1]: Stopped Service for snap application lxd.daemon.
tomponline commented 1 year ago

That stack trace has me confused because the line numbers for concurrent access don't correlate to operations on the map:

Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: fatal error: concurrent map read and map write
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: goroutine 1384626 [running]:
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: runtime.throw({0x1c45bfb?, 0x44?})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/panic.go:992 +0x71 fp=0xc000d69d40 sp=0xc000d69d10 pc=0x441411
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: runtime.mapaccess1_faststr(0xc001a34900?, 0xc0000bbde0?, {0xc000528077, 0x7})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/snap/go/9981/src/runtime/map_faststr.go:22 +0x3a5 fp=0xc000d69da8 sp=0xc000d69d40 pc=0x41e345
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.metricsGet.func2({0x1f8ac28, 0xc001a34900})
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:199 +0x1ca fp=0xc000d69fc0 sp=0xc000d69da8 pc=0x165230a
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: main.metricsGet.func4()
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: #011/build/lxd/parts/lxd/src/lxd/api_metrics.go:200 +0x2e fp=0xc000d69fe0 sp=0xc000d69fc0 pc=0x165210e
Nov 17 03:10:00 us2204-iph-lxd03 lxd.daemon[602129]: runtime.goexit()

Are you sure this server is running LXD 5.7, without any sideloaded lxd.debug processes?

tomponline commented 1 year ago

I think I found the problem anyway: https://github.com/tomponline/lxd/commit/fe5d1ff9b4b40599bb1045ec15902caa5f70476d

tomponline commented 1 year ago

I can't explain the network dropping issue. But could you turn off the metrics collector and see if that fixes it? That would at least tie the issue to the LXD crash I can see.

markrattray commented 1 year ago

Good morning and hope you had a good weekend.

How do you turn off the metrics collector?

I used lxc config set core.https_address ":8443" from: https://linuxcontainers.org/lxd/docs/master/metrics/#metrics

Thanks

tomponline commented 1 year ago

I meant stop scraping the endpoint.
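
E.g. if it's Prometheus doing the scraping, something like this would do it (a sketch assuming a systemd-managed Prometheus with the stock config path; adjust for your setup):

# stop the scraper entirely for the duration of the test...
sudo systemctl stop prometheus
# ...or comment out the LXD scrape job in /etc/prometheus/prometheus.yml
# and reload, so only the LXD endpoint stops being polled
sudo systemctl reload prometheus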

markrattray commented 1 year ago

Thanks for the quick response!!!

I've stopped Prometheus scraping the LXD metrics endpoint... fingers crossed we've got a lead.

Last time it took 10h30m for the VMs to drop off the network, so hope you have a good day.

markrattray commented 1 year ago

Good morning

Good news this morning... you may have found a lead. So far all the VMs are still on the network, both Ubuntu and Windows VMs.

Thanks :)

tomponline commented 1 year ago

Excellent. Out of interest, how many instances do you have running?

Hopefully the fix I put in for the metrics scrape crash (https://github.com/lxc/lxd/pull/11132/commits/fe5d1ff9b4b40599bb1045ec15902caa5f70476d) will get cherry-picked into latest/stable soon and you can see if that helps with LXD 5.8 when re-enabling the scrape.

markrattray commented 1 year ago

So far 18 virtual machines and 31 system containers on that one cluster node.

I will certainly give that fix a try. If you haven't heard from me when it lands in latest/stable, please give me a nudge in case I miss its release. Thanks.

markrattray commented 1 year ago

Good morning

FYI, everything is still good here on 5.7-c62733b without Prometheus scraping the metrics, so thanks for your great ideas and work, Tom!

tomponline commented 1 year ago

Ah, excellent to hear. That commit has been merged now, so hopefully @stgraber will include it in the snap as a cherry-pick on the latest/stable channel soon; it will certainly be in there by LXD 5.9 next month. It will be interesting to see whether that fixes the problem (it should certainly fix the crash at least) or whether the scrape itself was generating enough load to cause the problem. If so, I have an idea to limit the concurrency of the instance metrics collection internally to keep a lid on load.

tomponline commented 1 year ago

I'll close this for now, but do repost here if you continue to have issues with LXD 5.9 after enabling the scraping. Thanks

markrattray commented 1 year ago

Good morning

I'm sorry to say that all the VMs have lost network again, but this is a different event.

The LBs lost their TCP and HTTP connections with the test VM (used to monitor VM network outages) at Nov 29 13:23:54 EST. One LB is a system container on the same problematic physical host us2204-iph-lxd03, and the other is a system container on the backup physical host us2204-iph-lxd04.

As you can see here, it was the LXD 5.7 to 5.8 snap auto-update. Grepping all syslog entries for 13h00:

@us2204-iph-lxd03:~$ grep "Nov 29 13" /var/log/syslog
Nov 29 13:17:01 us2204-iph-lxd03 CRON[706318]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Nov 29 13:23:32 us2204-iph-lxd03 snapd[3711508]: storehelpers.go:748: cannot refresh: snap has no updates available: "core20", "snapd"
Nov 29 13:23:48 us2204-iph-lxd03 systemd[1]: Reloading.
Nov 29 13:23:48 us2204-iph-lxd03 systemd[1]: Starting Daily apt download activities...
Nov 29 13:23:48 us2204-iph-lxd03 systemd[1]: Starting Refresh fwupd metadata and update motd...
Nov 29 13:23:48 us2204-iph-lxd03 systemd[1]: Mounting Mount unit for lxd, revision 23983...
Nov 29 13:23:48 us2204-iph-lxd03 kernel: [2840725.392405] loop4: detected capacity change from 0 to 279848
Nov 29 13:23:48 us2204-iph-lxd03 systemd[1]: Mounted Mount unit for lxd, revision 23983.
Nov 29 13:23:48 us2204-iph-lxd03 dbus-daemon[3089]: [system] Activating via systemd: service name='org.freedesktop.fwupd' unit='fwupd.service' requested by ':1.746' (uid=113 pid=741882 comm="/usr/bin/fwupdmgr refresh " label="unconfined")
Nov 29 13:23:48 us2204-iph-lxd03 systemd[1]: Starting Firmware update daemon...
Nov 29 13:23:48 us2204-iph-lxd03 systemd[1]: snap.lxd.daemon.unix.socket: Deactivated successfully.
Nov 29 13:23:48 us2204-iph-lxd03 systemd[1]: Closed Socket unix for snap application lxd.daemon.
Nov 29 13:23:48 us2204-iph-lxd03 systemd[1]: Stopping Service for snap application lxd.daemon...
Nov 29 13:23:49 us2204-iph-lxd03 lxd.daemon[741920]: => Stop reason is: snap refresh
Nov 29 13:23:49 us2204-iph-lxd03 lxd.daemon[741920]: => Stopping LXD
Nov 29 13:23:49 us2204-iph-lxd03 lxd.daemon[1319685]: time="2022-11-29T13:23:49-05:00" level=warning msg="Could not handover member's responsibilities" err="Failed to transfer leadership: No online voter found"
Nov 29 13:23:49 us2204-iph-lxd03 systemd[1]: apt-daily.service: Deactivated successfully.
Nov 29 13:23:49 us2204-iph-lxd03 systemd[1]: Finished Daily apt download activities.
Nov 29 13:23:50 us2204-iph-lxd03 dbus-daemon[3089]: [system] Successfully activated service 'org.freedesktop.fwupd'
Nov 29 13:23:50 us2204-iph-lxd03 systemd[1]: Started Firmware update daemon.
Nov 29 13:23:50 us2204-iph-lxd03 systemd[1]: fwupd-refresh.service: Deactivated successfully.
Nov 29 13:23:50 us2204-iph-lxd03 systemd[1]: Finished Refresh fwupd metadata and update motd.
Nov 29 13:23:50 us2204-iph-lxd03 lxd.daemon[1319114]: => LXD exited cleanly
Nov 29 13:23:51 us2204-iph-lxd03 lxd.daemon[741920]: ==> Stopped LXD
Nov 29 13:23:51 us2204-iph-lxd03 systemd[1]: snap.lxd.daemon.service: Deactivated successfully.
Nov 29 13:23:51 us2204-iph-lxd03 systemd[1]: Stopped Service for snap application lxd.daemon.
Nov 29 13:23:51 us2204-iph-lxd03 systemd[1]: snap.lxd.user-daemon.unix.socket: Deactivated successfully.
Nov 29 13:23:51 us2204-iph-lxd03 systemd[1]: Closed Socket unix for snap application lxd.user-daemon.
Nov 29 13:23:51 us2204-iph-lxd03 snapd[3711508]: services.go:1066: RemoveSnapServices - socket snap.lxd.user-daemon.unix.socket
Nov 29 13:23:51 us2204-iph-lxd03 snapd[3711508]: services.go:1090: RemoveSnapServices - disabling snap.lxd.user-daemon.service
Nov 29 13:23:51 us2204-iph-lxd03 snapd[3711508]: services.go:1066: RemoveSnapServices - socket snap.lxd.daemon.unix.socket
Nov 29 13:23:51 us2204-iph-lxd03 snapd[3711508]: services.go:1090: RemoveSnapServices - disabling snap.lxd.daemon.service
Nov 29 13:23:51 us2204-iph-lxd03 snapd[3711508]: services.go:1090: RemoveSnapServices - disabling snap.lxd.activate.service
Nov 29 13:23:51 us2204-iph-lxd03 systemd[1]: Reloading.
Nov 29 13:24:13 us2204-iph-lxd03 kernel: [2840750.530578] audit: type=1400 audit(1669746253.962:752): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/snap/snapd/17576/usr/lib/snapd/snap-confine" pid=744250 comm="apparmor_parser"
Nov 29 13:24:13 us2204-iph-lxd03 kernel: [2840750.557587] audit: type=1400 audit(1669746253.990:753): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/snap/snapd/17576/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=744250 comm="apparmor_parser"
Nov 29 13:24:14 us2204-iph-lxd03 kernel: [2840750.693451] audit: type=1400 audit(1669746254.122:754): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.hook.install" pid=744258 comm="apparmor_parser"
Nov 29 13:24:14 us2204-iph-lxd03 kernel: [2840750.709515] audit: type=1400 audit(1669746254.138:755): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.buginfo" pid=744254 comm="apparmor_parser"
Nov 29 13:24:14 us2204-iph-lxd03 kernel: [2840750.710604] audit: type=1400 audit(1669746254.142:756): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.lxc-to-lxd" pid=744261 comm="apparmor_parser"
Nov 29 13:24:14 us2204-iph-lxd03 kernel: [2840750.710875] audit: type=1400 audit(1669746254.142:757): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.lxd" pid=744262 comm="apparmor_parser"
Nov 29 13:24:14 us2204-iph-lxd03 kernel: [2840750.710919] audit: type=1400 audit(1669746254.142:758): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.activate" pid=744252 comm="apparmor_parser"
Nov 29 13:24:14 us2204-iph-lxd03 kernel: [2840750.712269] audit: type=1400 audit(1669746254.142:759): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.migrate" pid=744263 comm="apparmor_parser"
Nov 29 13:24:14 us2204-iph-lxd03 kernel: [2840750.712441] audit: type=1400 audit(1669746254.142:760): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.lxc" pid=744260 comm="apparmor_parser"
Nov 29 13:24:14 us2204-iph-lxd03 kernel: [2840750.712501] audit: type=1400 audit(1669746254.142:761): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.hook.remove" pid=744259 comm="apparmor_parser"
Nov 29 13:24:14 us2204-iph-lxd03 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 29 13:24:15 us2204-iph-lxd03 systemd[1]: Listening on Socket unix for snap application lxd.user-daemon.
Nov 29 13:24:15 us2204-iph-lxd03 systemd[1]: Listening on Socket unix for snap application lxd.daemon.
Nov 29 13:24:15 us2204-iph-lxd03 systemd[1]: Starting Service for snap application lxd.activate...
Nov 29 13:24:15 us2204-iph-lxd03 lxd.activate[744345]: => Starting LXD activation
Nov 29 13:24:15 us2204-iph-lxd03 lxd.activate[744345]: ==> Loading snap configuration
Nov 29 13:24:15 us2204-iph-lxd03 lxd.activate[744345]: ==> Checking for socket activation support
Nov 29 13:24:15 us2204-iph-lxd03 lxd.activate[744345]: ==> Setting LXD socket ownership
Nov 29 13:24:15 us2204-iph-lxd03 lxd.activate[744345]: ==> Setting LXD user socket ownership
Nov 29 13:24:15 us2204-iph-lxd03 lxd.activate[744345]: ==> Checking if LXD needs to be activated
Nov 29 13:24:16 us2204-iph-lxd03 systemd[1]: Started Service for snap application lxd.daemon.
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: => Preparing the system (23983)
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Loading snap configuration
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Setting up mntns symlink (mnt:[4026535375])
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Setting up kmod wrapper
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Preparing /boot
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Preparing a clean copy of /run
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Preparing /run/bin
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Preparing a clean copy of /etc
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Preparing a clean copy of /usr/share/misc
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Setting up ceph configuration
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Setting up LVM configuration
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Setting up OVN configuration
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Rotating logs
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Setting up ZFS (2.1)
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Escaping the systemd cgroups
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ====> Detected cgroup V2
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Escaping the systemd process resource limits
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Disabling shiftfs on this kernel (auto)
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: => Re-using existing LXCFS
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Reloading LXCFS
Nov 29 13:24:16 us2204-iph-lxd03 lxd.daemon[744413]: ==> Cleaning up existing LXCFS namespace
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: Closed liblxcfs.so
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: Running destructor lxcfs_exit
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: Running constructor lxcfs_init to reload liblxcfs
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: mount namespace: 6
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: hierarchies:
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]:   0: fd:   8: cpuset,cpu,io,memory,hugetlb,pids,rdma,misc
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: Kernel supports pidfds
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: Kernel does not support swap accounting
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: api_extensions:
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - cgroups
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - sys_cpu_online
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - proc_cpuinfo
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - proc_diskstats
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - proc_loadavg
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - proc_meminfo
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - proc_stat
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - proc_swaps
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - proc_uptime
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - proc_slabinfo
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - shared_pidns
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - cpuview_daemon
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - loadavg_daemon
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: - pidfds
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[3549]: Reloaded LXCFS
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[744413]: => Starting LXD
Nov 29 13:24:17 us2204-iph-lxd03 lxd.daemon[744932]: time="2022-11-29T13:24:17-05:00" level=warning msg=" - Couldn't find the CGroup network priority controller, network priority will be ignored"
Nov 29 13:24:22 us2204-iph-lxd03 systemd[1]: snap.lxd.activate.service: Deactivated successfully.
Nov 29 13:24:22 us2204-iph-lxd03 systemd[1]: Finished Service for snap application lxd.activate.
Nov 29 13:24:22 us2204-iph-lxd03 systemd[1]: snap.lxd.activate.service: Consumed 1.308s CPU time.
Nov 29 13:24:22 us2204-iph-lxd03 systemd[1]: snap-lxd-23853.mount: Deactivated successfully.
Nov 29 13:24:22 us2204-iph-lxd03 systemd[1]: Reloading.
Nov 29 13:24:22 us2204-iph-lxd03 lxd.daemon[744413]: => LXD is ready
Nov 29 13:24:22 us2204-iph-lxd03 systemd[1]: Started snap.lxd.hook.configure.8b7328d8-5b72-46e6-ba99-40afe92418b8.scope.
Nov 29 13:24:23 us2204-iph-lxd03 systemd[1]: snap.lxd.hook.configure.8b7328d8-5b72-46e6-ba99-40afe92418b8.scope: Deactivated successfully.
Nov 29 13:24:23 us2204-iph-lxd03 snapd[3711508]: storehelpers.go:748: cannot refresh snap "lxd": snap has no updates available
Nov 29 13:24:26 us2204-iph-lxd03 lxd.daemon[744932]: time="2022-11-29T13:24:26-05:00" level=warning msg="Failed to delete operation" class=task description="Pruning leftover image files" err="Operation not found" operation=fb8beca6-e62b-48ab-b7b6-e0c41babb207 project= status=Success
Nov 29 13:24:27 us2204-iph-lxd03 lxd.daemon[744932]: time="2022-11-29T13:24:27-05:00" level=warning msg="Failed to delete operation" class=task description="Remove orphaned operations" err="Operation not found" operation=21bc62b8-be4f-4832-bf3c-6cf66b866523 project= status=Success
Nov 29 13:24:27 us2204-iph-lxd03 lxd.daemon[744932]: time="2022-11-29T13:24:27-05:00" level=warning msg="Failed to delete operation" class=task description="Cleaning up expired images" err="Operation not found" operation=dbf09f45-8d0b-4e70-bb92-a02e697fe4ab project= status=Success

LXD was updated to:

@us2204-iph-lxd03:~$ snap info lxd
  ...
services:
  lxd.activate:    oneshot, enabled, inactive
  lxd.daemon:      simple, enabled, active
  lxd.user-daemon: simple, enabled, inactive
snap-id:      J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking:     latest/stable
refresh-date: 2 days ago, at 13:24 EST
channels:
  latest/stable:    5.8-bb9c9b1   2022-11-25 (23983) 143MB -
  ...
installed:          5.8-bb9c9b1              (23983) 143MB -

This is all the entries in the daemon log:

@us2204-iph-lxd03:# cat /var/snap/lxd/common/lxd/logs/lxd.log
time="2022-11-29T13:24:17-05:00" level=warning msg=" - Couldn't find the CGroup network priority controller, network priority will be ignored"
time="2022-11-29T13:24:26-05:00" level=warning msg="Failed to delete operation" class=task description="Pruning leftover image files" err="Operation not found" operation=fb8beca6-e62b-48ab-b7b6-e0c41babb207 project= status=Success
time="2022-11-29T13:24:27-05:00" level=warning msg="Failed to delete operation" class=task description="Remove orphaned operations" err="Operation not found" operation=21bc62b8-be4f-4832-bf3c-6cf66b866523 project= status=Success
time="2022-11-29T13:24:27-05:00" level=warning msg="Failed to delete operation" class=task description="Cleaning up expired images" err="Operation not found" operation=dbf09f45-8d0b-4e70-bb92-a02e697fe4ab project= status=Success

This is all the entries from dmesg for 13h00:

@us2204-iph-lxd:# dmesg -T | grep "Nov 29 13"
[Tue Nov 29 13:22:44 2022] loop4: detected capacity change from 0 to 279848
[Tue Nov 29 13:23:09 2022] audit: type=1400 audit(1669746253.962:752): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/snap/snapd/17576/usr/lib/snapd/snap-confine" pid=744250 comm="apparmor_parser"
[Tue Nov 29 13:23:09 2022] audit: type=1400 audit(1669746253.990:753): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/snap/snapd/17576/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=744250 comm="apparmor_parser"
[Tue Nov 29 13:23:09 2022] audit: type=1400 audit(1669746254.122:754): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.hook.install" pid=744258 comm="apparmor_parser"
[Tue Nov 29 13:23:09 2022] audit: type=1400 audit(1669746254.138:755): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.buginfo" pid=744254 comm="apparmor_parser"
[Tue Nov 29 13:23:09 2022] audit: type=1400 audit(1669746254.142:756): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.lxc-to-lxd" pid=744261 comm="apparmor_parser"
[Tue Nov 29 13:23:09 2022] audit: type=1400 audit(1669746254.142:757): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.lxd" pid=744262 comm="apparmor_parser"
[Tue Nov 29 13:23:09 2022] audit: type=1400 audit(1669746254.142:758): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.activate" pid=744252 comm="apparmor_parser"
[Tue Nov 29 13:23:09 2022] audit: type=1400 audit(1669746254.142:759): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.migrate" pid=744263 comm="apparmor_parser"
[Tue Nov 29 13:23:09 2022] audit: type=1400 audit(1669746254.142:760): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.lxc" pid=744260 comm="apparmor_parser"
[Tue Nov 29 13:23:09 2022] audit: type=1400 audit(1669746254.142:761): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.hook.remove" pid=744259 comm="apparmor_parser"

I cannot give you QEMU logs for the VMs because the logs go missing.

Interestingly, there is one VM on the standalone LXD backup server (us2204-iph-lxd04), which was refreshed 6 days ago, and that solitary VM still has network.

@us2204-iph-lxd04:~$ snap info lxd
  ...
commands:
  - lxd.benchmark
  - lxd.buginfo
  - lxd.check-kernel
  - lxd.lxc
  - lxd.lxc-to-lxd
  - lxd
  - lxd.migrate
services:
  lxd.activate:    oneshot, enabled, inactive
  lxd.daemon:      simple, enabled, active
  lxd.user-daemon: simple, enabled, inactive
snap-id:      J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking:     latest/stable
refresh-date: 6 days ago, at 07:01 UTC
channels:
  latest/stable:    5.8-bb9c9b1   2022-11-25 (23983) 143MB -
  ...
installed:          5.8-bb9c9b1              (23983) 143MB -

This backup server is also dedicated to LXD and runs on an older-generation Dell server; it (us2204-iph-lxd04) is actually struggling for resources compared to the one having the VM network-disconnect issue (us2204-iph-lxd03). At the beginning of 2022, I replaced ESXi with Ubuntu 20.04 x86_64 and installed Dell OpenManage, and I upgraded both hosts to 22.04 in the last few months. I've kept them as vanilla as possible, installing extra packages only when needed; the only additions that come to mind are zfsutils-linux and the Dell OpenManage stack, so software-wise they are almost identical.

So it's just VMs on this one host us2204-iph-lxd03... the LBs are not losing connections to the many containers on it.
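
Side note: since the trigger looks like the unattended snap refresh, I'm considering pinning the refresh window so these reloads at least happen at a predictable time. A sketch using snapd's standard refresh settings (the window below is just an example):

# pin snap refreshes to a weekly maintenance window, e.g. Saturday 03:00
sudo snap set system refresh.timer=sat,03:00
# confirm the schedule and when the next refresh is expected
snap refresh --time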

NICs on the problematic host us2204-iph-lxd03, and only 1 port is connected:

@ius01a-lphlc103:# lspci | grep -i ethernet
01:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
01:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe

NICs on the backup host us2204-iph-lxd04, and only 1 port is connected:

@us2204-iph-lxd04:~$ lspci | grep -i ethernet
01:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
01:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
03:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)

Thanks

tomponline commented 1 year ago

Can you clarify what you mean by "dropping off the network"? What is the ip a output on the affected LXD hosts and inside the VMs?

Does lxc exec still work to get into the instances?

Can you also describe what the load balancers are?

markrattray commented 1 year ago

Thanks for the quick response.

"Dropping off the network" = the VMs lose their IP address. The link in the VM appears to be "up", but nothing works over it and the VM cannot renew its DHCP lease from the ISC DHCP server on the LAN.

lxc exec works fine into the instances.

Host us2204-iph-lxd03 itself seems fine, and PuTTY SSH sessions to the problematic host from another VM (still on ESXi) remain connected without needing the likes of screen.

Here is a partial lxc list from the "development" project. All the VMs have lost the primary NIC's IP. The Docker Swarm VMs (dkr) only show the default networking created by Docker (172.18.0.1). System containers are fine. The MS Windows VM (mw2022) doesn't show any IP normally. The rest, prefixed us2204, are Ubuntu Server 22.04 x86_64 instances.

+------------------+---------+------------------------------+------+-----------------+-----------+-----------------------------+
| us2204-dct-dfp01 | RUNNING | 192.169.0.164 (eth0)         |      | CONTAINER       | 0         | us2204-iph-lxd03.domain.tld |
+------------------+---------+------------------------------+------+-----------------+-----------+-----------------------------+
| us2204-dvm-dkr01 | RUNNING | 172.18.0.1 (docker_gwbridge) |      | VIRTUAL-MACHINE | 0         | us2204-iph-lxd03.domain.tld |
|                  |         | 172.17.0.1 (docker0)         |      |                 |           |                             |
+------------------+---------+------------------------------+------+-----------------+-----------+-----------------------------+
| us2204-dvm-dkr02 | RUNNING | 172.18.0.1 (docker_gwbridge) |      | VIRTUAL-MACHINE | 0         | us2204-iph-lxd03.domain.tld |
|                  |         | 172.17.0.1 (docker0)         |      |                 |           |                             |
+------------------+---------+------------------------------+------+-----------------+-----------+-----------------------------+
| us2204-dct-elk01 | RUNNING | 192.169.0.175 (eth0)         |      | CONTAINER       | 0         | us2204-iph-lxd03.domain.tld |
+------------------+---------+------------------------------+------+-----------------+-----------+-----------------------------+
| us2204-dct-elk02 | RUNNING | 192.169.0.176 (eth0)         |      | CONTAINER       | 0         | us2204-iph-lxd03.domain.tld |
+------------------+---------+------------------------------+------+-----------------+-----------+-----------------------------+
| us2204-dct-elk03 | RUNNING | 192.169.0.177 (eth0)         |      | CONTAINER       | 0         | us2204-iph-lxd03.domain.tld |
+------------------+---------+------------------------------+------+-----------------+-----------+-----------------------------+
| us2204-dvm-fsg01 | RUNNING |                              |      | VIRTUAL-MACHINE | 0         | us2204-iph-lxd03.domain.tld |
+------------------+---------+------------------------------+------+-----------------+-----------+-----------------------------+
| us2204-dvm-fsg03 | RUNNING |                              |      | VIRTUAL-MACHINE | 0         | us2204-iph-lxd03.domain.tld |
+------------------+---------+------------------------------+------+-----------------+-----------+-----------------------------+
| mw2022-dvm-mad01 | RUNNING |                              |      | VIRTUAL-MACHINE | 0         | us2204-iph-lxd03.domain.tld |
+------------------+---------+------------------------------+------+-----------------+-----------+-----------------------------+

Result of ip a for the connected port on host us2204-iph-lxd03:

2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 10000
    link/ether 90:b1:1c:2a:a1:d4 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f0
    inet 192.168.0.57/24 brd 192.168.0.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::92b1:1cff:fe2a:a1d4/64 scope link
       valid_lft forever preferred_lft forever

Result of ip a in VM us2204-dvm-fsg01 on host us2204-iph-lxd03. The DHCP server is ISC DHCPD running as a system container on this same host:

root@us2204-iph-lxd03:# lxc exec us2204-dvm-fsg01 bash

root@us2204-dvm-fsg01:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:16:3e:c1:5f:52 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::216:3eff:fec1:5f52/64 scope link
       valid_lft forever preferred_lft forever

root@us2204-dvm-fsg01:~# dhclient -r

root@us2204-dvm-fsg01:~# dhclient
....(zzzZZzzz)....
root@us2204-dvm-fsg01:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:16:3e:c1:5f:52 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::216:3eff:fec1:5f52/64 scope link
       valid_lft forever preferred_lft forever

The load balancers are Pulse Secure Virtual Traffic Managers, which I deployed as system containers. They are only relevant here because they test a TCP and an HTTP connection to a VM on this us2204-iph-lxd03 host. The VM only runs NGINX so that the LBs can test and alert me when they lose connection to it, which they do perfectly. There are two LB instances: one on the problematic host us2204-iph-lxd03 and the other on the standalone backup host us2204-iph-lxd04.
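
For anyone wanting to reproduce this monitoring without LBs, a crude stand-in is a curl loop against the test VM (the IP below is a hypothetical example):

vmip=192.168.0.50   # hypothetical address of the NGINX test VM
while true; do
    curl -fsS -m 2 "http://$vmip/" >/dev/null || echo "$(date): VM unreachable"
    sleep 10
done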

In the VM, there are no dmesg entries for the 29th, and these are the only entries in /var/log/syslog:

root@us2204-dvm-fsg01:~# grep "Nov 29 13" /var/log/syslog
Nov 29 13:17:01 us2204-dvm-fsg01 CRON[82696]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Nov 29 13:24:26 us2204-dvm-fsg01 systemd-resolved[432]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 192.168.0.12.
Nov 29 13:24:34 us2204-dvm-fsg01 systemd-resolved[432]: Using degraded feature set TCP instead of UDP for DNS server 192.168.0.12.
.... (repeated)...
tomponline commented 1 year ago

Do you still see the macvlan interface on the host? E.g. on my system I see something like macc4623824@eno1 (where eno1 is the parent interface) for each running VM.

Also when this occurs, what is the output of sudo iptables-save and sudo nft list-ruleset?

markrattray commented 1 year ago

Good afternoon and hope you had a good weekend.

Yes. If you revisit the "Issue description", the interfaces do remain on the host, which is why I need to use ip link delete macc4623824 so that the MAC address can be assigned to a new interface on power-on. This happens mainly with Windows VMs, though an Ubuntu VM has done the same on a rare occasion. Normally Ubuntu VMs recover on power-on, generating a new interface and reassigning the MAC to it fine, but so far Windows VMs always need me to run the ip link delete command before I can start them to get them back on the network.
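
For reference, this cleanup could be scripted rather than done by hand; a rough sketch, keyed on the VM's MAC address and assuming the NIC device is named eth0 in the instance config (the instance name is just an example):

vm=us2204-dvm-dkr01
# MAC that LXD wants to assign to the new interface
mac=$(lxc config get "$vm" volatile.eth0.hwaddr)
# host-side macvlan interface still holding that MAC
dev=$(ip -o link | awk -v mac="$mac" '$0 ~ mac {gsub(/@.*/, "", $2); print $2}')
# delete it so the MAC can be reused, then start the VM
[ -n "$dev" ] && sudo ip link delete "$dev"
lxc start "$vm"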

An Ubuntu Server 22.04 VM from the images remote repo, currently in this state:

# lxc config show us2204-dvm-dkr01
    ...
    volatile.eth0.host_name: maca42e0d83

# ip link | grep maca42e0d83
    137: maca42e0d83@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500

iptables

root@us2204-iph-lxd03:# sudo iptables-save
    # Warning: iptables-legacy tables present, use iptables-legacy-save to see them

root@us2204-iph-lxd03:# sudo iptables-legacy-save
    # Generated by iptables-save v1.8.7 on Mon Dec  5 08:51:46 2022
    *raw
    :PREROUTING ACCEPT [319950341:60061166398]
    :OUTPUT ACCEPT [161759947:73792541050]
    COMMIT
    # Completed on Mon Dec  5 08:51:46 2022
    # Generated by iptables-save v1.8.7 on Mon Dec  5 08:51:46 2022
    *mangle
    :PREROUTING ACCEPT [319950341:60061166398]
    :INPUT ACCEPT [176002707:48974799735]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [161759947:73792541050]
    :POSTROUTING ACCEPT [161759947:73792541050]
    COMMIT
    # Completed on Mon Dec  5 08:51:46 2022
    # Generated by iptables-save v1.8.7 on Mon Dec  5 08:51:46 2022
    *nat
    :PREROUTING ACCEPT [0:0]
    :INPUT ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    :POSTROUTING ACCEPT [0:0]
    COMMIT
    # Completed on Mon Dec  5 08:51:46 2022
    # Generated by iptables-save v1.8.7 on Mon Dec  5 08:51:46 2022
    *filter
    :INPUT ACCEPT [176002707:48974799735]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [161759947:73792541050]
    COMMIT
    # Completed on Mon Dec  5 08:51:46 2022

nft ruleset

# sudo nft list-ruleset
    Error: syntax error, unexpected newline, expecting string
        list-ruleset
                ^
# sudo nft list ruleset

(No output; the nftables ruleset is empty.)

Thanks

tomponline commented 1 year ago

OK, so I still don't think I understand the issue sufficiently.

You're saying that the VM loses connectivity, but the macvlan interface remains on the host and the interface remains up inside the VM.

So far makes sense I think.

But I don't understand why you need to manually delete the macvlan interface on the host. Are you stopping the VM first? Are you saying the interface doesn't get cleaned up on stop?

markrattray commented 1 year ago

You're saying that the VM loses connectivity, but the macvlan interface remains on the host and the interface remains up inside the VM.

Yes, the link seems up but it is dead; it cannot reach anything on the LAN and vice versa. If I lxc exec into the VM instance:

root@us2204-dvm-dkr01:~# ping 192.168.0.10
    ping: connect: Network is unreachable

root@us2204-dvm-dkr01:~# ip link
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether 00:16:3e:cc:ea:86 brd ff:ff:ff:ff:ff:ff
    3: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
        link/ether 02:42:2d:fa:b9:7b brd ff:ff:ff:ff:ff:ff
    4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
        link/ether 02:42:42:ab:ce:6c brd ff:ff:ff:ff:ff:ff
    10: vethc3bc285@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP mode DEFAULT group default
        link/ether ae:35:65:2a:d4:f1 brd ff:ff:ff:ff:ff:ff link-netnsid 1

But I don't understand why you need to manually delete the macvlan interface on the host. Are you stopping the VM first? Are you saying the interface doesn't get cleaned up on stop?

The only way I know to get the VM back onto the network is as follows.

This example is a Windows VM still in this state from the previous snap update; it was just powered off via the LXD console / Windows GUI and then started via lxc start:

# lxc start mw2022-ivm-mad01 --project default
    Error: Failed to start device "eth0": Failed to set the MAC address: Failed to run: ip link set dev mac4fb5ba46 address 00:16:3e:d9:88:f1: exit status 2 (RTNETLINK answers: Address already in use))

# ip link | grep -B 1 "00:16:3e:d9:88:f1"
    151: mac9c74b056@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:d9:88:f1 brd ff:ff:ff:ff:ff:ff

# ip link delete mac9c74b056

# lxc start mw2022-ivm-mad01 --project default

# lxc list --project default | grep mad
    | mw2022-ivm-mad01        | RUNNING |                              |      | VIRTUAL-MACHINE | 0         | us2204-iph-lxd03.domain.tld |
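
For reference, the manual steps above could be wrapped into a small helper. This is only a sketch: it assumes the MAC in volatile.eth0.hwaddr is the one stuck on the leftover host link and that deleting that link is safe, i.e. the VM is stopped:

#!/bin/sh
# Hypothetical helper: remove a leftover host-side macvlan link, then start the VM.
# Usage: ./start-stuck-vm.sh <instance> [project]
inst="$1"
project="${2:-default}"

# MAC address LXD will try to assign on start.
mac=$(lxc config get "$inst" volatile.eth0.hwaddr --project "$project")

# Find the host link currently holding that MAC (if any) and delete it.
dev=$(ip -o link | awk -v mac="$mac" '$0 ~ mac { sub(/@.*/, "", $2); print $2 }')
[ -n "$dev" ] && ip link delete "$dev"

lxc start "$inst" --project "$project"
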
tomponline commented 1 year ago

OK, so the next time this happens, please show the output of ip l on the host and lxc config show <instance> --expanded for the affected instances before you stop/reboot them.

We can then compare the ip l list to the volatile.eth0.host_name config entries and see if they match up. If they don't, it suggests that something is altering the host-side interface name, which could explain both the connection dropping and the lack of cleanup on shutdown.
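
(A rough one-pass way to collect those values, assuming the default project:)

for inst in $(lxc list --format csv -c n); do
    printf '%s: host_name=%s hwaddr=%s\n' "$inst" \
        "$(lxc config get "$inst" volatile.eth0.host_name)" \
        "$(lxc config get "$inst" volatile.eth0.hwaddr)"
done

The host_name values can then be checked off against the ip l output.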

markrattray commented 1 year ago

It appears that the devices are there in the list. I still have several VMs in this state. Here are two, one Ubuntu and the other Windows. I will probably have to delete the device mace1713df6 for the Windows VM to start up. I'm not sure about the Ubuntu one, but it's certainly not communicating on the network, as it has lost its IP on the eth0 interface.

# ip l
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 10000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 10000
        link/ether 90:b1:1c:2a:a1:d4 brd ff:ff:ff:ff:ff:ff
        altname enp1s0f0
    3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 10000
        link/ether 90:b1:1c:2a:a1:d5 brd ff:ff:ff:ff:ff:ff
        altname enp1s0f1
    4: eno3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 10000
        link/ether 90:b1:1c:2a:a1:d6 brd ff:ff:ff:ff:ff:ff
        altname enp2s0f0
    5: eno4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 10000
        link/ether 90:b1:1c:2a:a1:d7 brd ff:ff:ff:ff:ff:ff
        altname enp2s0f1
    6: idrac: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ether e0:db:55:06:77:c3 brd ff:ff:ff:ff:ff:ff
    113: mac38af0240@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:90:94:89 brd ff:ff:ff:ff:ff:ff
--> 118: mace1713df6@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:8a:35:ed brd ff:ff:ff:ff:ff:ff
    120: mac7e5751f4@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:48:ed:af brd ff:ff:ff:ff:ff:ff
    130: mac83bcc53e@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:a5:13:0b brd ff:ff:ff:ff:ff:ff
    132: maccf88b6f4@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:1d:71:a5 brd ff:ff:ff:ff:ff:ff
    133: mac1a1709f2@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:f9:d2:d5 brd ff:ff:ff:ff:ff:ff
    134: macbb32a1cb@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:0c:57:c6 brd ff:ff:ff:ff:ff:ff
--> 137: maca42e0d83@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:cc:ea:86 brd ff:ff:ff:ff:ff:ff
    138: macc6b7e9a4@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:d8:f4:3c brd ff:ff:ff:ff:ff:ff
    139: mac05660c50@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:c1:5f:52 brd ff:ff:ff:ff:ff:ff
    140: macc15cacf7@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:a0:bf:dd brd ff:ff:ff:ff:ff:ff
    143: macf3522c51@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:5b:3a:88 brd ff:ff:ff:ff:ff:ff
    144: mac231cccb5@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:f3:dd:b2 brd ff:ff:ff:ff:ff:ff
    145: macc2b65fbc@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:f7:e4:ce brd ff:ff:ff:ff:ff:ff
    146: macda472719@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:12:55:8d brd ff:ff:ff:ff:ff:ff
    147: mac51fd35aa@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:1b:a0:f1 brd ff:ff:ff:ff:ff:ff
    152: mac731bd6e2@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:6a:a0:00 brd ff:ff:ff:ff:ff:ff
    154: mac2d955df9@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:d9:88:f1 brd ff:ff:ff:ff:ff:ff

Ubuntu Server 22.04 VM (images repo)

# lxc config show --expanded us2204-dvm-dkr01
    architecture: x86_64
    config:
      image.architecture: amd64
      image.description: Ubuntu jammy amd64 (20220827_07:42)
      image.os: Ubuntu
      image.release: jammy
      image.serial: "20220827_07:42"
      image.type: disk-kvm.img
      image.variant: cloud
      limits.cpu: "6"
      limits.memory: 16GiB
      security.syscalls.intercept.sysinfo: "true"
      volatile.base_image: f2581b6034a6d699f0ad75f813deedfd7617de75d91107d3f7c53fa8bb2fbba7
      volatile.cloud-init.instance-id: c40eb5b8-643f-4333-a6db-f3e3c975cd25
      volatile.eth0.host_name: maca42e0d83
      volatile.eth0.hwaddr: 00:16:3e:cc:ea:86
      volatile.eth0.last_state.created: "false"
      volatile.last_state.power: RUNNING
      volatile.uuid: 49e10686-c824-4324-9713-33199dd2f306
      volatile.vsock_id: "64"
    devices:
      us2204-dvm-dkr01_disk01:
        pool: sp01
        source: us2204-dvm-dkr01_disk01
        type: disk
      eth0:
        name: eth0
        nictype: macvlan
        parent: eno1
        type: nic
      root:
        path: /
        pool: sp00
        size: 30GB
        type: disk
    ephemeral: false
    profiles:
    - default
    stateful: false
    description: ""

Windows VM

# lxc config show --expanded mw2022-dvm-mad01
    architecture: x86_64
    config:
      image.architecture: amd64
      image.description: MS WS 2022 S,D,c (20220530_2030)
      image.os: Windows
      image.release: "2022"
      image.serial: "20220530_2030"
      image.type: virtual-machine
      image.variant: Standard, Desktop Experience, cloudbase-init
      limits.cpu: "4"
      limits.memory: 6GiB
      security.syscalls.intercept.sysinfo: "true"
      volatile.base_image: 42e2e67fc989aa3e3e5704883eb7222a3aee3b215ec3b632865f42c2b7d18d3c
      volatile.cloud-init.instance-id: 42b93059-7b38-4017-b52f-1fb3c0115f0f
      volatile.eth0.host_name: mace1713df6
      volatile.eth0.hwaddr: 00:16:3e:8a:35:ed
      volatile.eth0.last_state.created: "false"
      volatile.last_state.power: RUNNING
      volatile.uuid: ccf95b32-7d6e-4deb-b3e7-e2feeb3ef273
      volatile.vsock_id: "54"
    devices:
      eth0:
        name: eth0
        nictype: macvlan
        parent: eno1
        type: nic
      root:
        path: /
        pool: sp00
        type: disk
    ephemeral: false
    profiles:
    - default
    stateful: false
    description: ""
tomponline commented 1 year ago

OK, interesting. Looking at the us2204-dvm-dkr01 instance:

This means that when the VM started up, it assigned the MAC address 00:16:3e:cc:ea:86 to the interface:

volatile.eth0.host_name: maca42e0d83
volatile.eth0.hwaddr: 00:16:3e:cc:ea:86

And according to your interface listing inside the VM (from earlier), we can also see it is 00:16:3e:cc:ea:86:

root@us2204-dvm-dkr01:~# ip link
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether 00:16:3e:cc:ea:86 brd ff:ff:ff:ff:ff:ff
    3: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
        link/ether 02:42:2d:fa:b9:7b brd ff:ff:ff:ff:ff:ff
    4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
        link/ether 02:42:42:ab:ce:6c brd ff:ff:ff:ff:ff:ff
    10: vethc3bc285@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP mode DEFAULT group default
        link/ether ae:35:65:2a:d4:f1 brd ff:ff:ff:ff:ff:ff link-netnsid 1

Which is also what the current host-side interface shows, 00:16:3e:cc:ea:86:

--> 137: maca42e0d83@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:cc:ea:86 brd ff:ff:ff:ff:ff:ff
markrattray commented 1 year ago

Sorry, I'm not getting what you mean by the MAC addresses...

and no, these two VMs have not been rebooted yet.

tomponline commented 1 year ago

OK, I was getting confused between the different VMs; too much scrolling. I've updated my last post and everything seems to line up.

tomponline commented 1 year ago

So if you stop us2204-dvm-dkr01, does maca42e0d83 get removed?

markrattray commented 1 year ago

Yes, for us2204-dvm-dkr01 the device is removed when it is stopped, so there is no entry via ip l.

tomponline commented 1 year ago

So it can start back up again?

markrattray commented 1 year ago

Yes, no problems there.

Windows VMs in this state won't start, and I remember an Ubuntu Desktop VM from the images repo that didn't either... they needed the device on the host deleted first.

tomponline commented 1 year ago

Please run lxc monitor --type=logging --pretty in one terminal window on the host where the Windows VM is running, then stop the VM using lxc stop -f <instance>, and paste the logging output from the first window here, along with the output of ip l on the host. Thanks.

markrattray commented 1 year ago

Good morning Tom

lxc monitor --type=logging --pretty:

time="2022-12-06T04:53:32-05:00" level=debug msg="Event listener server handler started" id=dbd9c7cb-c3d4-4777-a3c8-936106d5d803 local=/var/snap/lxd/common/lxd/unix.socket remote=@
time="2022-12-06T04:53:33-05:00" level=debug msg="Syscall handler received fds 70(/proc/<pid>), 71(/proc/<pid>/mem), and 114([seccomp notify])"
time="2022-12-06T04:53:33-05:00" level=debug msg="Send seccomp notification for id(14489745358209183564)"
time="2022-12-06T04:53:33-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=114 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183564 seccomp_notify_mem_fd=71 seccomp_notify_pid=618570 syscall_number=99
time="2022-12-06T04:53:34-05:00" level=debug msg="Syscall handler received fds 70(/proc/<pid>), 71(/proc/<pid>/mem), and 114([seccomp notify])"
time="2022-12-06T04:53:34-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=114 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183565 seccomp_notify_mem_fd=71 seccomp_notify_pid=618570 syscall_number=99
time="2022-12-06T04:53:34-05:00" level=debug msg="Send seccomp notification for id(14489745358209183565)"
time="2022-12-06T04:53:35-05:00" level=debug msg="Syscall handler received fds 70(/proc/<pid>), 71(/proc/<pid>/mem), and 114([seccomp notify])"
time="2022-12-06T04:53:35-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=114 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183566 seccomp_notify_mem_fd=71 seccomp_notify_pid=618570 syscall_number=99
time="2022-12-06T04:53:35-05:00" level=debug msg="Send seccomp notification for id(14489745358209183566)"
time="2022-12-06T04:53:36-05:00" level=debug msg="Syscall handler received fds 70(/proc/<pid>), 71(/proc/<pid>/mem), and 114([seccomp notify])"
time="2022-12-06T04:53:36-05:00" level=debug msg="Send seccomp notification for id(14489745358209183567)"
time="2022-12-06T04:53:36-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=114 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183567 seccomp_notify_mem_fd=71 seccomp_notify_pid=618570 syscall_number=99
time="2022-12-06T04:53:37-05:00" level=debug msg="Syscall handler received fds 70(/proc/<pid>), 71(/proc/<pid>/mem), and 114([seccomp notify])"
time="2022-12-06T04:53:37-05:00" level=debug msg="Send seccomp notification for id(14489745358209183568)"
time="2022-12-06T04:53:37-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=114 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183568 seccomp_notify_mem_fd=71 seccomp_notify_pid=618570 syscall_number=99
time="2022-12-06T04:53:37-05:00" level=debug msg="Syscall handler received fds 70(/proc/<pid>), 71(/proc/<pid>/mem), and 114([seccomp notify])"
time="2022-12-06T04:53:37-05:00" level=debug msg="Send seccomp notification for id(14489745358209183569)"
time="2022-12-06T04:53:37-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=114 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183569 seccomp_notify_mem_fd=71 seccomp_notify_pid=749168 syscall_number=99
time="2022-12-06T04:53:37-05:00" level=debug msg="Syscall handler received fds 70(/proc/<pid>), 71(/proc/<pid>/mem), and 114([seccomp notify])"
time="2022-12-06T04:53:37-05:00" level=debug msg="Send seccomp notification for id(14489745358209183570)"
time="2022-12-06T04:53:37-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=114 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183570 seccomp_notify_mem_fd=71 seccomp_notify_pid=749173 syscall_number=99
time="2022-12-06T04:53:37-05:00" level=debug msg="Syscall handler received fds 70(/proc/<pid>), 71(/proc/<pid>/mem), and 114([seccomp notify])"
time="2022-12-06T04:53:37-05:00" level=debug msg="Send seccomp notification for id(14489745358209183571)"
time="2022-12-06T04:53:37-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=114 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183571 seccomp_notify_mem_fd=71 seccomp_notify_pid=749175 syscall_number=99
time="2022-12-06T04:53:37-05:00" level=debug msg="Syscall handler received fds 70(/proc/<pid>), 71(/proc/<pid>/mem), and 114([seccomp notify])"
time="2022-12-06T04:53:37-05:00" level=debug msg="Send seccomp notification for id(14489745358209183572)"
time="2022-12-06T04:53:37-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=114 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183572 seccomp_notify_mem_fd=71 seccomp_notify_pid=749178 syscall_number=99
time="2022-12-06T04:53:37-05:00" level=debug msg="Heartbeat updating local raft members" members="[{{1 us2204-iph-lxd03.domain.tld:8443 voter} us2204-iph-lxd03.domain.tld}]"
time="2022-12-06T04:53:37-05:00" level=debug msg="Starting heartbeat round" local="us2204-iph-lxd03.domain.tld:8443" mode=normal
time="2022-12-06T04:53:37-05:00" level=debug msg="Completed heartbeat round" duration=2.89852ms local="us2204-iph-lxd03.domain.tld:8443"
time="2022-12-06T04:53:37-05:00" level=debug msg="Handling API request" ip=@ method=GET protocol=unix url=/1.0 username=root
time="2022-12-06T04:53:37-05:00" level=debug msg="Handling API request" ip=@ method=GET protocol=unix url="/1.0/events?project=pq" username=root
time="2022-12-06T04:53:37-05:00" level=debug msg="Event listener server handler started" id=c5581a13-a90d-46c1-888c-daca995fe7af local=/var/snap/lxd/common/lxd/unix.socket remote=@
time="2022-12-06T04:53:37-05:00" level=debug msg="Handling API request" ip=@ method=PUT protocol=unix url="/1.0/instances/mw2022-qvm-mad01/state?project=pq" username=root
time="2022-12-06T04:53:37-05:00" level=debug msg="New operation" class=task description="Stopping instance" operation=ff3ab167-2be8-4a63-87d4-b49cced973ba project=pq
time="2022-12-06T04:53:37-05:00" level=debug msg="Started operation" class=task description="Stopping instance" operation=ff3ab167-2be8-4a63-87d4-b49cced973ba project=pq
time="2022-12-06T04:53:37-05:00" level=debug msg="Stop started" instance=mw2022-qvm-mad01 instanceType=virtual-machine project=pq stateful=false
time="2022-12-06T04:53:37-05:00" level=debug msg="Handling API request" ip=@ method=GET protocol=unix url="/1.0/operations/ff3ab167-2be8-4a63-87d4-b49cced973ba?project=pq" username=root
time="2022-12-06T04:53:37-05:00" level=debug msg="Instance operation lock created" action=stop instance=mw2022-qvm-mad01 project=pq reusable=false
time="2022-12-06T04:53:37-05:00" level=debug msg="Instance stopped" instance=mw2022-qvm-mad01 instanceType=virtual-machine project=pq
time="2022-12-06T04:53:37-05:00" level=debug msg="Waiting for VM process to finish" instance=mw2022-qvm-mad01 instanceType=virtual-machine project=pq
time="2022-12-06T04:53:37-05:00" level=debug msg="onStop hook started" instance=mw2022-qvm-mad01 instanceType=virtual-machine project=pq target=stop
time="2022-12-06T04:53:37-05:00" level=debug msg="VM process finished" instance=mw2022-qvm-mad01 instanceType=virtual-machine project=pq
time="2022-12-06T04:53:38-05:00" level=debug msg="Syscall handler received fds 67(/proc/<pid>), 71(/proc/<pid>/mem), and 114([seccomp notify])"
time="2022-12-06T04:53:38-05:00" level=debug msg="Send seccomp notification for id(14489745358209183573)"
time="2022-12-06T04:53:38-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=114 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183573 seccomp_notify_mem_fd=71 seccomp_notify_pid=618570 syscall_number=99
time="2022-12-06T04:53:38-05:00" level=debug msg="Stopping device" device=nocloud instance=mw2022-qvm-mad01 instanceType=virtual-machine project=pq type=disk
time="2022-12-06T04:53:38-05:00" level=debug msg="Stopping device" device=root instance=mw2022-qvm-mad01 instanceType=virtual-machine project=pq type=disk
time="2022-12-06T04:53:38-05:00" level=debug msg="Stopping device" device=eth0 instance=mw2022-qvm-mad01 instanceType=virtual-machine project=pq type=nic
time="2022-12-06T04:53:38-05:00" level=debug msg="UnmountInstance started" instance=mw2022-qvm-mad01 project=pq
time="2022-12-06T04:53:38-05:00" level=debug msg="Failed to unmount" attempt=0 err="device or resource busy" path=/var/snap/lxd/common/lxd/storage-pools/sp00/virtual-machines/pq_mw2022-qvm-mad01
time="2022-12-06T04:53:38-05:00" level=debug msg="Unmounted ZFS dataset" dev=sp00/virtual-machines/pq_mw2022-qvm-mad01 driver=zfs path=/var/snap/lxd/common/lxd/storage-pools/sp00/virtual-machines/pq_mw2022-qvm-mad01 pool=sp00 volName=pq_mw2022-qvm-mad01
time="2022-12-06T04:53:38-05:00" level=debug msg="UnmountInstance finished" instance=mw2022-qvm-mad01 project=pq
time="2022-12-06T04:53:38-05:00" level=debug msg="Deactivated ZFS volume" dev=sp00/virtual-machines/pq_mw2022-qvm-mad01.block driver=zfs pool=sp00 volName=pq_mw2022-qvm-mad01
time="2022-12-06T04:53:38-05:00" level=debug msg="onStop hook finished" instance=mw2022-qvm-mad01 instanceType=virtual-machine project=pq target=stop
time="2022-12-06T04:53:38-05:00" level=debug msg="Instance operation lock finished" action=stop err="<nil>" instance=mw2022-qvm-mad01 project=pq reusable=false
time="2022-12-06T04:53:38-05:00" level=debug msg="Success for operation" class=task description="Stopping instance" operation=ff3ab167-2be8-4a63-87d4-b49cced973ba project=pq
time="2022-12-06T04:53:38-05:00" level=debug msg="Stop finished" instance=mw2022-qvm-mad01 instanceType=virtual-machine project=pq stateful=false
time="2022-12-06T04:53:38-05:00" level=debug msg="Event listener server handler stopped" listener=c5581a13-a90d-46c1-888c-daca995fe7af local=/var/snap/lxd/common/lxd/unix.socket remote=@
time="2022-12-06T04:53:39-05:00" level=debug msg="Syscall handler received fds 67(/proc/<pid>), 70(/proc/<pid>/mem), and 71([seccomp notify])"
time="2022-12-06T04:53:39-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=71 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183574 seccomp_notify_mem_fd=70 seccomp_notify_pid=618570 syscall_number=99
time="2022-12-06T04:53:39-05:00" level=debug msg="Send seccomp notification for id(14489745358209183574)"
time="2022-12-06T04:53:40-05:00" level=debug msg="Syscall handler received fds 67(/proc/<pid>), 70(/proc/<pid>/mem), and 71([seccomp notify])"
time="2022-12-06T04:53:40-05:00" level=debug msg="Syscall handler received fds 67(/proc/<pid>), 70(/proc/<pid>/mem), and 71([seccomp notify])"
time="2022-12-06T04:53:40-05:00" level=debug msg="Send seccomp notification for id(14489745358209183575)"
time="2022-12-06T04:53:40-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=71 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183575 seccomp_notify_mem_fd=70 seccomp_notify_pid=749797 syscall_number=99
time="2022-12-06T04:53:40-05:00" level=debug msg="Send seccomp notification for id(14489745358209183576)"
time="2022-12-06T04:53:40-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=71 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183576 seccomp_notify_mem_fd=70 seccomp_notify_pid=618570 syscall_number=99
time="2022-12-06T04:53:40-05:00" level=debug msg="Syscall handler received fds 67(/proc/<pid>), 70(/proc/<pid>/mem), and 71([seccomp notify])"
time="2022-12-06T04:53:40-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=71 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183577 seccomp_notify_mem_fd=70 seccomp_notify_pid=749800 syscall_number=99
time="2022-12-06T04:53:40-05:00" level=debug msg="Send seccomp notification for id(14489745358209183577)"
time="2022-12-06T04:53:41-05:00" level=debug msg="Syscall handler received fds 67(/proc/<pid>), 70(/proc/<pid>/mem), and 71([seccomp notify])"
time="2022-12-06T04:53:41-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=71 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183578 seccomp_notify_mem_fd=70 seccomp_notify_pid=618570 syscall_number=99
time="2022-12-06T04:53:41-05:00" level=debug msg="Send seccomp notification for id(14489745358209183578)"
time="2022-12-06T04:53:42-05:00" level=debug msg="Syscall handler received fds 67(/proc/<pid>), 70(/proc/<pid>/mem), and 71([seccomp notify])"
time="2022-12-06T04:53:42-05:00" level=debug msg="Send seccomp notification for id(14489745358209183579)"
time="2022-12-06T04:53:42-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=71 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183579 seccomp_notify_mem_fd=70 seccomp_notify_pid=618570 syscall_number=99
time="2022-12-06T04:53:43-05:00" level=debug msg="Syscall handler received fds 67(/proc/<pid>), 70(/proc/<pid>/mem), and 71([seccomp notify])"
time="2022-12-06T04:53:43-05:00" level=debug msg="Send seccomp notification for id(14489745358209183580)"
time="2022-12-06T04:53:43-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=71 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183580 seccomp_notify_mem_fd=70 seccomp_notify_pid=618570 syscall_number=99
time="2022-12-06T04:53:43-05:00" level=debug msg="Syscall handler received fds 67(/proc/<pid>), 70(/proc/<pid>/mem), and 71([seccomp notify])"
time="2022-12-06T04:53:43-05:00" level=debug msg="Send seccomp notification for id(14489745358209183581)"
time="2022-12-06T04:53:43-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=71 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183581 seccomp_notify_mem_fd=70 seccomp_notify_pid=749810 syscall_number=99
time="2022-12-06T04:53:43-05:00" level=debug msg="Syscall handler received fds 67(/proc/<pid>), 70(/proc/<pid>/mem), and 71([seccomp notify])"
time="2022-12-06T04:53:43-05:00" level=debug msg="Send seccomp notification for id(14489745358209183582)"
time="2022-12-06T04:53:43-05:00" level=debug msg="Handling sysinfo syscall" audit_architecture=3221225534 container=us2204-isc-vtm01 project="{{map[features.images:true features.networks:true features.profiles:true features.storage.buckets:true features.storage.volumes:true] Default LXD project} default []}" seccomp_notify_fd=71 seccomp_notify_flags=0 seccomp_notify_id=14489745358209183582 seccomp_notify_mem_fd=70 seccomp_notify_pid=749811 syscall_number=99

ip l:

# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 10000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 10000
    link/ether 90:b1:1c:2a:a1:d4 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f0
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 10000
    link/ether 90:b1:1c:2a:a1:d5 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f1
4: eno3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 10000
    link/ether 90:b1:1c:2a:a1:d6 brd ff:ff:ff:ff:ff:ff
    altname enp2s0f0
5: eno4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 10000
    link/ether 90:b1:1c:2a:a1:d7 brd ff:ff:ff:ff:ff:ff
    altname enp2s0f1
6: idrac: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether e0:db:55:06:77:c3 brd ff:ff:ff:ff:ff:ff
113: mac38af0240@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:90:94:89 brd ff:ff:ff:ff:ff:ff
118: mace1713df6@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:8a:35:ed brd ff:ff:ff:ff:ff:ff
130: mac83bcc53e@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:a5:13:0b brd ff:ff:ff:ff:ff:ff
132: maccf88b6f4@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:1d:71:a5 brd ff:ff:ff:ff:ff:ff
133: mac1a1709f2@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:f9:d2:d5 brd ff:ff:ff:ff:ff:ff
134: macbb32a1cb@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:0c:57:c6 brd ff:ff:ff:ff:ff:ff
139: mac05660c50@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:c1:5f:52 brd ff:ff:ff:ff:ff:ff
140: macc15cacf7@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:a0:bf:dd brd ff:ff:ff:ff:ff:ff
143: macf3522c51@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:5b:3a:88 brd ff:ff:ff:ff:ff:ff
144: mac231cccb5@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:f3:dd:b2 brd ff:ff:ff:ff:ff:ff
145: macc2b65fbc@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:f7:e4:ce brd ff:ff:ff:ff:ff:ff
146: macda472719@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:12:55:8d brd ff:ff:ff:ff:ff:ff
147: mac51fd35aa@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:1b:a0:f1 brd ff:ff:ff:ff:ff:ff
152: mac731bd6e2@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:6a:a0:00 brd ff:ff:ff:ff:ff:ff
155: macceda4f28@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:cc:ea:86 brd ff:ff:ff:ff:ff:ff
156: mac2ea24845@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:d8:f4:3c brd ff:ff:ff:ff:ff:ff
157: mace4b3f948@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:d9:88:f1 brd ff:ff:ff:ff:ff:ff

instance config:

# lxc config show --expanded mw2022-qvm-mad01 --project pq
architecture: x86_64
config:
  image.architecture: amd64
  image.description: MS WS 2022 S,D,c (20220530_2030)
  image.os: Windows
  image.release: "2022"
  image.serial: "20220530_2030"
  image.type: virtual-machine
  image.variant: Standard, Desktop Experience, cloudbase-init
  limits.cpu: "4"
  limits.memory: 6GiB
  security.syscalls.intercept.sysinfo: "true"
  volatile.base_image: 42e2e67fc989aa3e3e5704883eb7222a3aee3b215ec3b632865f42c2b7d18d3c
  volatile.cloud-init.instance-id: f34f0a25-a0c7-43c3-a07f-63af4c6d731d
  volatile.eth0.hwaddr: 00:16:3e:48:ed:af
  volatile.last_state.power: STOPPED
  volatile.last_state.ready: "false"
  volatile.uuid: dad8e897-0c10-419c-8a62-db4d8ce3eba9
  volatile.vsock_id: "57"
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eno1
    type: nic
  root:
    path: /
    pool: sp00
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
tomponline commented 1 year ago

Thanks, so we can see the NIC device being removed without error:

time="2022-12-06T04:53:38-05:00" level=debug msg="Stopping device" device=eth0 instance=mw2022-qvm-mad01 instanceType=virtual-machine project=pq type=nic

We can also infer that the post hook that actually removes the macvlan interface has run, because the volatile.eth0.host_name: mace1713df6 setting has been removed, and that hook is what wipes this setting:

https://github.com/lxc/lxd/blob/master/lxd/device/nic_macvlan.go#L277-L284

And the code that removes the interface is:

https://github.com/lxc/lxd/blob/master/lxd/device/nic_macvlan.go#L289-L295
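
(To confirm that hook ran for a given instance, the key can be queried directly; empty output means it was wiped:)

# lxc config get mw2022-qvm-mad01 volatile.eth0.host_name --project pq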

So next I want to verify that the snap's mount namespace has the correct view of the interfaces in /sys/class/net/.

Please show the output of:

sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- ls -la /sys/class/net/
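
(To spot any divergence between the host's view and the snap's, diffing the two listings should also work:)

# ls /sys/class/net/ | sort > /tmp/links.host
# sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- ls /sys/class/net/ | sort > /tmp/links.snap
# diff /tmp/links.host /tmp/links.snap
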
tomponline commented 1 year ago

Also, the MAC address appears to have changed since yesterday:

Yesterday

volatile.eth0.hwaddr: 00:16:3e:8a:35:ed

Now:

volatile.eth0.hwaddr: 00:16:3e:48:ed:af
markrattray commented 1 year ago

Thanks for your help.

Re MAC change

This is a different Windows VM: mw2022-qvm-mad01, which is a QA MS Active Directory Domain Controller. Yesterday's was the Dev domain DC (...-dvm-...). This isn't our real naming convention, but I thought it would be useful to show the OS and purpose for this troubleshooting while preserving privacy.

sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- ls -la /sys/class/net/

# sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- ls -la /sys/class/net/
total 0
drwxr-xr-x  2 root root    0 Oct 27 17:17 .
drwxr-xr-x 77 root root    0 Oct 27 17:17 ..
-rw-r--r--  1 root root 4096 Nov 15 21:28 bonding_masters
lrwxrwxrwx  1 root root    0 Oct 27 17:18 eno1 -> ../../devices/pci0000:00/0000:00:01.1/0000:01:00.0/net/eno1
lrwxrwxrwx  1 root root    0 Oct 27 17:18 eno2 -> ../../devices/pci0000:00/0000:00:01.1/0000:01:00.1/net/eno2
lrwxrwxrwx  1 root root    0 Oct 27 17:18 eno3 -> ../../devices/pci0000:00/0000:00:01.0/0000:02:00.0/net/eno3
lrwxrwxrwx  1 root root    0 Oct 27 17:18 eno4 -> ../../devices/pci0000:00/0000:00:01.0/0000:02:00.1/net/eno4
lrwxrwxrwx  1 root root    0 Oct 27 17:18 idrac -> ../../devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.6/1-1.6.3/1-1.6.3:1.0/net/idrac
lrwxrwxrwx  1 root root    0 Oct 27 17:17 lo -> ../../devices/virtual/net/lo
lrwxrwxrwx  1 root root    0 Nov 22 07:15 mac05660c50 -> ../../devices/virtual/net/mac05660c50
lrwxrwxrwx  1 root root    0 Nov 22 07:12 mac1a1709f2 -> ../../devices/virtual/net/mac1a1709f2
lrwxrwxrwx  1 root root    0 Nov 22 07:45 mac231cccb5 -> ../../devices/virtual/net/mac231cccb5
lrwxrwxrwx  1 root root    0 Dec  5 11:33 mac2ea24845 -> ../../devices/virtual/net/mac2ea24845
lrwxrwxrwx  1 root root    0 Nov 21 05:52 mac38af0240 -> ../../devices/virtual/net/mac38af0240
lrwxrwxrwx  1 root root    0 Nov 22 08:29 mac51fd35aa -> ../../devices/virtual/net/mac51fd35aa
lrwxrwxrwx  1 root root    0 Nov 22 11:48 mac731bd6e2 -> ../../devices/virtual/net/mac731bd6e2
lrwxrwxrwx  1 root root    0 Nov 22 07:11 mac83bcc53e -> ../../devices/virtual/net/mac83bcc53e
lrwxrwxrwx  1 root root    0 Nov 22 07:12 macbb32a1cb -> ../../devices/virtual/net/macbb32a1cb
lrwxrwxrwx  1 root root    0 Nov 22 07:15 macc15cacf7 -> ../../devices/virtual/net/macc15cacf7
lrwxrwxrwx  1 root root    0 Nov 22 07:49 macc2b65fbc -> ../../devices/virtual/net/macc2b65fbc
lrwxrwxrwx  1 root root    0 Dec  5 11:26 macceda4f28 -> ../../devices/virtual/net/macceda4f28
lrwxrwxrwx  1 root root    0 Nov 22 07:12 maccf88b6f4 -> ../../devices/virtual/net/maccf88b6f4
lrwxrwxrwx  1 root root    0 Nov 22 08:28 macda472719 -> ../../devices/virtual/net/macda472719
lrwxrwxrwx  1 root root    0 Nov 21 06:20 mace1713df6 -> ../../devices/virtual/net/mace1713df6
lrwxrwxrwx  1 root root    0 Dec  5 11:36 mace4b3f948 -> ../../devices/virtual/net/mace4b3f948
lrwxrwxrwx  1 root root    0 Nov 22 07:45 macf3522c51 -> ../../devices/virtual/net/macf3522c51

Just to add, in case it's of any use: at the moment the following VMs (x86_64) remain in this state:

tomponline commented 1 year ago

OK, I'm getting quite confused now.

Maybe I'm not explaining properly, so let's refresh.

I'm first going to solve the issue of why the host-side macvlan interface isn't being removed.

To see what's happening, I need the following when the instance is running, and then again afterwards once it has been stopped with lxc stop <instance> --force.

Also, before the stop is initiated, I need the lxc monitor --type=logging --pretty command running to capture the stopping debug logs.
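
(Based on the commands requested earlier in the thread, the capture would presumably look something like this, with <instance> substituted; terminal 1 keeps lxc monitor running throughout:)

# terminal 1
lxc monitor --type=logging --pretty

# terminal 2, while the instance is running
ip l
lxc config show <instance> --expanded
lxc stop <instance> --force
# terminal 2, after the stop
ip l
lxc config show <instance> --expanded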