Open nrgaway opened 4 years ago
I can't say I've ever encountered this before. From your backtrace it looks like you're hitting a manual assertion in QEMU in the ivshmem hotplug path.
One thing I noticed is that your VM log shows a lot of ivshmem devices being added and removed in short succession. From some investigation, the failing code in QEMU seems to be detecting whether a memory region with the same internal device path as an existing region is being added. Maybe by adding/removing devices quickly, kvmchand is somehow hitting a condition in QEMU where a new ivshmem device is added at the same device path as a recently removed device before that device has been fully removed?
I'm trying to decipher the logs to determine why ivshmem devices are being added/removed so quickly, but I can't really make sense of it. I know that kvmchand will try to remove ivshmem devices if the process that created the corresponding vchan exits, so maybe there's a daemon that's starting up, creating a vchan, failing somehow and exiting, then getting immediately restarted by systemd? This could be happening on either the host or the VM. Do you think that there's anything like this that could be happening?
I will check if anything is getting immediately restarted by systemd after failing. I reviewed the log above again and noticed that right before the crash, when `qrexec-client-vm dom0 qubes.NotifyUpdates /bin/sh -c 'echo 1'` and `qrexec-client-vm dom0 qubes.SyncAppMenus /etc/qubes-rpc/qubes.GetAppmenus` ran, no `vfio-pci` devices were created or logged as being created. Normally after `qrexec-client-vm` runs I would expect logs for `qubes.NotifyUpdates` similar to the following:
--- HOST -----------------------------------------------------------------------
Sep 18 06:18:15 localhost.localdomain qrexec-policy-daemon[1369149]: qrexec: qubes.NotifyUpdates+: fc32 -> @adminvm: allowed to @adminvm
Sep 18 06:18:15 localhost.localdomain kvmchand[1369129]: [INFO] daemon/localhandler.c:620: Client connected! fd: 12
Sep 18 06:18:15 localhost.localdomain kvmchand[1369120]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 11 port 513
Sep 18 06:18:15 localhost.localdomain kvmchand[1369120]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 11, client_dom: 0, port 513, read_min: 65536, write_min: 65536
Sep 18 06:18:15 localhost.localdomain kvmchand[1369122]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 7
Sep 18 06:18:15 localhost.localdomain kvmchand[1369125]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1369919
Sep 18 06:18:20 localhost.localdomain kvmchand[1369129]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 12
--- VM -------------------------------------------------------------------------
[ 705.071199] pci 0000:00:13.0: [1af4:1110] type 00 class 0x050000
[ 705.072266] pci 0000:00:13.0: reg 0x10: [mem 0x00000000-0x000000ff]
[ 705.073093] pci 0000:00:13.0: reg 0x14: [mem 0x00000000-0x00000fff]
[ 705.073954] pci 0000:00:13.0: reg 0x18: [mem 0x00000000-0x0003ffff 64bit pref]
[ 705.075770] pci 0000:00:13.0: BAR 2: assigned [mem 0x800140000-0x80017ffff 64bit pref]
[ 705.076940] pci 0000:00:13.0: BAR 1: assigned [mem 0xc8450000-0xc8450fff]
[ 705.077901] pci 0000:00:13.0: BAR 0: assigned [mem 0xc8451000-0xc84510ff]
[ 705.078931] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.079356] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.080869] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.081996] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.083764] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.084194] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.085544] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.086682] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.089555] vfio-pci 0000:00:13.0: Adding to iommu group 3
[ 705.090098] vfio-pci 0000:00:13.0: Adding kernel taint for vfio-noiommu group on device
[ 705.090831] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.091385] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.093527] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.094836] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.096532] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.096992] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.098220] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.099249] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.102371] pci_bus 0000:01: Allocating resources
[ 705.102374] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.102831] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.104102] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.105249] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.107011] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.107444] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.108783] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.109918] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.114034] pci_bus 0000:02: Allocating resources
[ 705.114039] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.114677] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.116383] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.117563] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.119124] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.119563] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.120739] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.122138] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.123881] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.124335] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.125514] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.126539] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.128154] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.128590] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.129779] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.130892] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.133603] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.134046] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.135627] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.137291] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.138925] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.139373] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.140549] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.141592] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.145056] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.145702] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.147513] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.148588] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.150120] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.150538] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.151697] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.152672] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.154184] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.154608] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.156275] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.157416] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.159051] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.159477] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.160652] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.161649] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.163210] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.163636] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.164875] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.166946] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.169307] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.169993] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.171388] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.172489] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.174107] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.174569] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.175795] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.178211] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.180040] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.180737] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.182369] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.183422] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.184998] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.185454] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.186648] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.188078] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.189891] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.190366] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.191613] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.192669] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.194289] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.194746] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.195933] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.196973] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.199615] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.200066] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.201482] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.202646] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.204302] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.204770] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.205973] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.207013] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.209845] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.210341] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.211744] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.213274] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.215128] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.215555] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.216711] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.217686] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.219175] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.219605] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.221222] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.222344] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.223962] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.224382] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.225533] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.226520] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.228011] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.228425] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.229562] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.230666] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.233838] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.234262] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.235827] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.237212] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.238711] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.239129] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.240268] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.241246] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.243822] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.244252] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.245908] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.247317] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.248815] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.249237] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.250346] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.251319] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.252825] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.253243] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.255211] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.256431] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.258674] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.259239] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.260328] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.261270] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.262709] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.263109] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.264204] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.265158] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.267157] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.267563] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.268817] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.269803] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 705.271289] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 705.271696] pci 0000:00:02.0: bridge window [io 0xd000-0xdfff]
[ 705.272806] pci 0000:00:02.0: bridge window [mem 0xc8200000-0xc83fffff]
[ 705.273762] pci 0000:00:02.0: bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[ 705.275246] pci 0000:00:03.0: PCI bridge to [bus 02]
[ 705.276383] pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
[ 705.277803] pci 0000:00:03.0: bridge window [mem 0xc8000000-0xc81fffff]
[ 705.278929] pci 0000:00:03.0: bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[ 709.071944] vfio-pci 0000:00:13.0: vfio-noiommu device opened by user (kvmchand:517)
[ 710.100345] vfio-pci 0000:00:13.0: Relaying device request to user (#0)
[ 710.108227] vfio-pci 0000:00:13.0: Removing from iommu group 3
I also tried running `SyncAppMenus` on its own, with varied results:
localhost.localdomain kvmchand[1372872]: malloc_consolidate(): invalid chunk size
Most likely the issue is with `SyncAppMenus`, so I will work on figuring out why it is failing. I will also try increasing the `qrexec-client` timeout alarm to something like 60 seconds (the default is 2; I already increased it to 10) to see if that makes any difference.
I've been trying to figure out this issue and am making some progress. I have found some other issues along the way which, when resolved, may render this one moot. I have added more debugging logs to check whether anything is getting immediately restarted by systemd after failing. While I am not quite sure if systemd is restarting anything, I can see from the host logs below that `qrexec-policy-daemon` does immediately attempt to establish a connection to the VM.

The following log is generated by running `qrexec-client-vm dom0 qubes.SyncAppMenus /etc/qubes-rpc/qubes.GetAppmenus` on the VM, which calls the host's `qubes.SyncAppMenus` RPC script. The host script then connects back to the VM's `qubes.GetAppmenus` script, which passes the contents of all the .desktop files on the VM back to the host. Once the list is received, the host reconnects to the VM once per .desktop file (~94 times) to get the icons.
06:04:25 HOST qrexec-policy-daemon[1524778]: qrexec: qubes.SyncAppMenus+: fc32 -> @adminvm: allowed to @adminvm
06:04:25 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:25.951 qrexec-client[1525770]: qrexec-client.c:611:main: client_init: Connecting to 45, port=513, timeout=60
06:04:25 HOST kvmchand[1524758]: [INFO] daemon/localhandler.c:620: Client connected! fd: 12
06:04:25 HOST kvmchand[1524749]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 45 port 513
06:04:25 HOST kvmchand[1524749]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 45, client_dom: 0, port 513, read_min: 65536, write_min: 65536
06:04:25 HOST kvmchand[1524751]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 5
06:04:25 HOST kvmchand[1524753]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1525656
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: qrexec-client.c:630:main: data_vchan connected
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=8
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=8, size=8
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=4
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=4, size=4
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: qrexec-client.c:632:main: data_protocol_version: 3
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: qrexec-client.c:635:main: prepare_ret: 0
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: qrexec-client.c:638:main: select_loop...
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: remote.c:65:handle_remote_data: handle_remote_data: while loop
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=65536
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=65536, size=65536
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: remote.c:65:handle_remote_data: handle_remote_data: while loop
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=65491
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=65490, size=65491
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=65490, ret=1, size=65491
06:04:40 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:40.967 qrexec-client[1525770]: remote.c:65:handle_remote_data: handle_remote_data: while loop
06:04:40 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:40.967 qrexec-client[1525770]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=65536
06:04:40 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:40.967 qrexec-client[1525770]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=-1, size=65536
06:04:40 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:40.967 qrexec-client[1525770]: txrx-vchan.c:81:read_vchan_all: read_vchan_all: ret < 0, ret=-1
06:04:40 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:40.967 qrexec-client[1525770]: remote.c:76:handle_remote_data: handle_remote_data: !read_vchan_all
06:04:40 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:40.967 qrexec-client[1525770]: process_io.c:36:handle_vchan_error: Error while vchan read, exiting
06:04:40 HOST kvmchand[1524758]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 12
06:04:40 HOST qrexec-policy-daemon[1524778]: error calling qrexec-policy-agent in dom0
06:04:40 HOST qrexec-policy-daemon[1524778]: Traceback (most recent call last):
06:04:40 HOST qrexec-policy-daemon[1524778]: File "/usr/lib/python3.8/site-packages/qrexec/tools/qrexec_policy_exec.py", line 133, in execute
06:04:40 HOST qrexec-policy-daemon[1524778]: await super().execute(caller_ident)
06:04:40 HOST qrexec-policy-daemon[1524778]: File "/usr/lib/python3.8/site-packages/qrexec/policy/parser.py", line 528, in execute
06:04:40 HOST qrexec-policy-daemon[1524778]: raise ExecutionFailed('qrexec-client failed: {}'.format(command))
06:04:40 HOST qrexec-policy-daemon[1524778]: qrexec.exc.ExecutionFailed: qrexec-client failed: ['/usr/lib/qubes/qrexec-client', '-d', '@adminvm', '-c', 'SOCKET10,fc32,45', '-E', 'QUBESRPC qubes.SyncAppMenus+ fc32 keyword adminvm']
06:04:40 HOST qrexec-policy-daemon[1524778]: During handling of the above exception, another exception occurred:
06:04:40 HOST qrexec-policy-daemon[1524778]: Traceback (most recent call last):
06:04:40 HOST qrexec-policy-daemon[1524778]: File "/usr/lib/python3.8/site-packages/qrexec/tools/qrexec_policy_exec.py", line 151, in notify
06:04:40 HOST qrexec-policy-daemon[1524778]: await call_socket_service(guivm, service, source_domain, params)
06:04:40 HOST qrexec-policy-daemon[1524778]: File "/usr/lib/python3.8/site-packages/qrexec/server.py", line 105, in call_socket_service_local
06:04:40 HOST qrexec-policy-daemon[1524778]: reader, writer = await asyncio.open_unix_connection(path)
06:04:40 HOST qrexec-policy-daemon[1524778]: File "/usr/lib64/python3.8/asyncio/streams.py", line 111, in open_unix_connection
06:04:40 HOST qrexec-policy-daemon[1524778]: transport, _ = await loop.create_unix_connection(
06:04:40 HOST qrexec-policy-daemon[1524778]: File "/usr/lib64/python3.8/asyncio/unix_events.py", line 244, in create_unix_connection
06:04:40 HOST qrexec-policy-daemon[1524778]: await self.sock_connect(sock, path)
06:04:40 HOST qrexec-policy-daemon[1524778]: File "/usr/lib64/python3.8/asyncio/selector_events.py", line 494, in sock_connect
06:04:40 HOST qrexec-policy-daemon[1524778]: return await fut
06:04:40 HOST qrexec-policy-daemon[1524778]: File "/usr/lib64/python3.8/asyncio/selector_events.py", line 499, in _sock_connect
06:04:40 HOST qrexec-policy-daemon[1524778]: sock.connect(address)
06:04:40 HOST qrexec-policy-daemon[1524778]: FileNotFoundError: [Errno 2] No such file or directory
06:04:40 HOST qrexec-policy-daemon[1524778]: qrexec: qubes.SyncAppMenus: fc32 -> dom0: error while executing: qrexec-client failed: ['/usr/lib/qubes/qrexec-client', '-d', '@adminvm', '-c', 'SOCKET10,fc32,45', '-E', 'QUBESRPC qubes.SyncAppMenus+ fc32 keyword adminvm']
# These are started right after the failure which I am guessing is actually causing the crash
06:04:40 HOST qubes.SyncAppMenus+-fc32[1525797]: fc32: Updating gnome-system-monitor-kde
06:04:40 HOST qrexec-policy-daemon[1525795]: fc32: Updating gnome-system-monitor-kde
06:04:40 HOST kvmchand[1524758]: [INFO] daemon/localhandler.c:620: Client connected! fd: 12
06:04:40 HOST kvmchand[1524749]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 0, client_dom: 45, port 514, read_min: 65536, write_min: 65536
06:04:40 HOST kvmchand[1524751]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 6
06:04:40 HOST kvmchand[1524753]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1525656
06:04:47 HOST kvmchand[1524758]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 12
06:04:47 HOST qrexec-policy-daemon[1525795]: fc32: Updating org.gnome.clocks
06:04:47 HOST qubes.SyncAppMenus+-fc32[1525797]: fc32: Updating org.gnome.clocks
06:04:47 HOST kvmchand[1524758]: [INFO] daemon/localhandler.c:620: Client connected! fd: 12
06:04:47 HOST kvmchand[1524749]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 0, client_dom: 45, port 514, read_min: 65536, write_min: 65536
06:04:47 HOST kvmchand[1524751]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 7
06:04:47 HOST kvmchand[1524753]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1525656
06:04:47 HOST audit[1525656]: ANOM_ABEND auid=4294967295 uid=107 gid=107 ses=4294967295 pid=1525656 comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" sig=6 res=1
So with the patch from #23 applied, are you still seeing similar behavior where vchans are created and destroyed in short succession? Maybe I'll need to change kvmchand to be lazier about ivshmem device removal, so that devices don't get removed until one of the domains shuts down or N vchans have been closed, to work around this.
With #23 applied the behaviour is similar. The difference is that `SyncAppMenus` no longer fails when receiving the list of application .desktop files. Then, when connecting to the guest to get the icons, the first icon is received and the crash is triggered when attempting to get the second icon.
EDIT: Replaced the logs with ones containing the QEMU stack trace, and added the kvmchand backtrace (I had forgotten to set prlimit for `qemu-system-x86_64`).
The following is mostly a full host log relating to the issue:
22:38:06 HOST qrexec-policy-daemon[73188]: qrexec: qubes.SyncAppMenus+: fc32 -> @adminvm: allowed to @adminvm
22:38:06 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:06.603 qrexec-client[76114]: qrexec-client.c:611:main: client_init: Connecting to 7, port=513, timeout=60
22:38:06 HOST kvmchand[73169]: [INFO] daemon/localhandler.c:620: Client connected! fd: 12
22:38:06 HOST kvmchand[73159]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 7 port 513
22:38:06 HOST kvmchand[73159]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 7, client_dom: 0, port 513, read_min: 65536, write_min: 65536
22:38:06 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 6
22:38:06 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 75117
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: qrexec-client.c:630:main: data_vchan connected
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=8
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=8, size=8
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=4
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=4, size=4
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: qrexec-client.c:632:main: data_protocol_version: 3
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: qrexec-client.c:635:main: prepare_ret: 0
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: qrexec-client.c:638:main: select_loop...
22:38:12 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:12.107 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:12 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:12.107 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=65536
22:38:12 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:12.107 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=65536, size=65536
22:38:12 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:12.107 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:12 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:12.107 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=65491
22:38:12 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:12.107 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=65491, size=65491
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.618 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.618 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=65536
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.618 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=65536, size=65536
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.618 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.618 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20525
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.618 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20525, size=20525
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.624 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.624 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.624 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=24576, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.625 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.626 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=16384
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.626 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=16384, size=16384
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.627 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.627 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=8192
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.627 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=8192, size=8192
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=28672, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.629 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.629 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=24576, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.629 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.629 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=12288
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.629 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=12288, size=12288
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.632 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.632 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.632 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=28672, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.632 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.632 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.632 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.634 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.634 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=36864
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.634 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=36864, size=36864
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.635 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.635 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=16384
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.635 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=16384, size=16384
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.636 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.636 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=4096
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.636 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=4096, size=4096
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.637 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.637 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.637 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=24576, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.637 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.637 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.637 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=24576, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.639 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.639 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=32768
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.639 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=32768, size=32768
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.639 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.639 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.640 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=28672, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.640 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.640 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.640 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=28672, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.642 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.642 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.642 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=28672, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=12288
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=12288, size=12288
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.644 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.644 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.644 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.644 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.644 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=16384
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=16384, size=16384
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=32768
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.646 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=32768, size=32768
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.646 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.646 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.646 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=28672, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.647 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.647 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=34418
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.647 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=34418, size=34418
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.647 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.647 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=0
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.647 qrexec-client[76114]: remote.c:92:handle_remote_data: handle_remote_data: hdr.len==0, REMOTE_EOF
# SyncAppMenus - Get icon: first icon transferred; second triggers coredump?
22:38:21 HOST qubes.SyncAppMenus+-fc32[76150]: Warning: ignoring key 'Comment' of org.keepassxc.KeePassXC
22:38:21 HOST qrexec-policy-daemon[76148]: Warning: ignoring key 'Comment' of org.keepassxc.KeePassXC
22:38:21 HOST qrexec-policy-daemon[76148]: fc32: Updating gnome-system-monitor-kde
22:38:21 HOST qrexec-policy-daemon[76148]: fc32: Get icon: vm=<qubesadmin.vm.QubesVM object at 0x7f1ad5f55730>, icon=org.gnome.SystemMonitor
22:38:21 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Updating gnome-system-monitor-kde
22:38:21 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Get icon: vm=<qubesadmin.vm.QubesVM object at 0x7f1ad5f55730>, icon=org.gnome.SystemMonitor
22:38:21 HOST kvmchand[73169]: [INFO] daemon/localhandler.c:620: Client connected! fd: 13
22:38:21 HOST kvmchand[73159]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 0, client_dom: 7, port 514, read_min: 65536, write_min: 65536
22:38:21 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 7
22:38:21 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 75117
22:38:27 HOST kvmchand[73169]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 13
22:38:27 HOST qrexec-policy-daemon[76148]: fc32: Got icon
22:38:27 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Got icon
# Second icon
22:38:27 HOST qrexec-policy-daemon[76148]: fc32: Updating org.gnome.clocks
22:38:27 HOST qrexec-policy-daemon[76148]: fc32: Get icon: vm=<qubesadmin.vm.QubesVM object at 0x7f1ad5f55730>, icon=org.gnome.clocks
22:38:27 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Updating org.gnome.clocks
22:38:27 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Get icon: vm=<qubesadmin.vm.QubesVM object at 0x7f1ad5f55730>, icon=org.gnome.clocks
22:38:27 HOST kvmchand[73169]: [INFO] daemon/localhandler.c:620: Client connected! fd: 13
22:38:27 HOST kvmchand[73159]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 0, client_dom: 7, port 514, read_min: 65536, write_min: 65536
22:38:27 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 8
22:38:27 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 75117
# Coredump start
22:38:27 HOST audit[75117]: ANOM_ABEND auid=4294967295 uid=107 gid=107 ses=4294967295 pid=75117 comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" sig=6 res=1
22:38:28 HOST systemd[1]: Started Process Core Dump (PID 76207/UID 0).
22:38:28 HOST audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@2-76207-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
22:38:28 HOST kvmchand[73165]: [ERROR] daemon/ivshmem.c:905: Received unknown message from client on fd 41
22:38:28 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 35
22:38:28 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 29
22:38:28 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 41
22:38:28 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 23
22:38:28 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 10
22:38:28 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:501: Stopping listener thread.
22:38:28 HOST libvirtd[2728]: Unable to read from monitor: Connection reset by peer
22:38:28 HOST kvmchand[73161]: libvirt: QEMU Driver error : Unable to read from monitor: Connection reset by peer
22:38:28 HOST kvmchand[73161]: [71B blob data]
22:38:28 HOST kvmchand[73161]: [ERROR] daemon/libvirt.c:710: Failed to attach ivshmem to id 7!
22:38:28 HOST kvmchand[73159]: [ERROR] daemon/connections.c:238: Failed to attach ivshmem device to client domain 7.
22:38:28 HOST kvmchand[73169]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 13
22:38:28 HOST qrexec-policy-daemon[76148]: fc32: Failed to get icon for org.gnome.clocks: No icon received
22:38:28 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Failed to get icon for org.gnome.clocks: No icon received
22:38:28 HOST qrexec-policy-daemon[76148]: fc32: Updating exo-mail-reader
22:38:28 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Updating exo-mail-reader
22:38:28 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Get icon: vm=<qubesadmin.vm.QubesVM object at 0x7f1ad5f55730>, icon=emblem-mail
22:38:28 HOST qrexec-policy-daemon[76148]: fc32: Get icon: vm=<qubesadmin.vm.QubesVM object at 0x7f1ad5f55730>, icon=emblem-mail
# --- QEMU COREDUMP ------------------------------------------------------------
22:38:28 HOST systemd-machined[2431]: Machine qemu-7-fc32 terminated.
22:38:28 HOST systemd-coredump[76208]: Process 75117 (qemu-system-x86) of user 107 dumped core.
Stack trace of thread 75117:
#0 0x00007f9ef00839e5 __GI_raise (libc.so.6 + 0x3c9e5)
#1 0x00007f9ef006c895 __GI_abort (libc.so.6 + 0x25895)
#2 0x00005585f614d474 qemu_ram_set_idstr (qemu-system-x86_64 + 0x2f1474)
#3 0x00005585f6437fa5 vmstate_register_ram (qemu-system-x86_64 + 0x5dbfa5)
#4 0x00005585f639281a ivshmem_common_realize (qemu-system-x86_64 + 0x53681a)
#5 0x00005585f63c2573 pci_qdev_realize (qemu-system-x86_64 + 0x566573)
#6 0x00005585f6332b55 device_set_realized (qemu-system-x86_64 + 0x4d6b55)
#7 0x00005585f64a257b property_set_bool (qemu-system-x86_64 + 0x64657b)
#8 0x00005585f64a78b4 object_property_set_qobject (qemu-system-x86_64 + 0x64b8b4)
#9 0x00005585f64a52ba object_property_set_bool (qemu-system-x86_64 + 0x6492ba)
#10 0x00005585f62c7f48 qdev_device_add (qemu-system-x86_64 + 0x46bf48)
#11 0x00005585f62c82ab qmp_device_add (qemu-system-x86_64 + 0x46c2ab)
#12 0x00005585f654b2c0 do_qmp_dispatch (qemu-system-x86_64 + 0x6ef2c0)
#13 0x00005585f64496ed monitor_qmp_dispatch (qemu-system-x86_64 + 0x5ed6ed)
#14 0x00005585f6449dac monitor_qmp_bh_dispatcher (qemu-system-x86_64 + 0x5eddac)
#15 0x00005585f6594dac aio_bh_call (qemu-system-x86_64 + 0x738dac)
#16 0x00005585f6598494 aio_dispatch (qemu-system-x86_64 + 0x73c494)
#17 0x00005585f6594a32 aio_ctx_dispatch (qemu-system-x86_64 + 0x738a32)
#18 0x00007f9ef141d78f g_main_dispatch (libglib-2.0.so.0 + 0x5278f)
#19 0x00005585f65976b8 glib_pollfds_poll (qemu-system-x86_64 + 0x73b6b8)
#20 0x00005585f62cc2af main_loop (qemu-system-x86_64 + 0x4702af)
#21 0x00005585f615379c main (qemu-system-x86_64 + 0x2f779c)
#22 0x00007f9ef006e042 __libc_start_main (libc.so.6 + 0x27042)
#23 0x00005585f6156afe _start (qemu-system-x86_64 + 0x2faafe)
Stack trace of thread 75131:
#0 0x00007f9ef0224750 __lll_lock_wait (libpthread.so.0 + 0x13750)
#1 0x00007f9ef021ce53 __GI___pthread_mutex_lock (libpthread.so.0 + 0xbe53)
#2 0x00005585f659a76d qemu_mutex_lock_impl (qemu-system-x86_64 + 0x73e76d)
#3 0x00005585f61a280e qemu_mutex_lock_iothread_impl (qemu-system-x86_64 + 0x34680e)
#4 0x00005585f65ad8e6 call_rcu_thread (qemu-system-x86_64 + 0x7518e6)
#5 0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
#6 0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
#7 0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)
Stack trace of thread 75140:
#0 0x00007f9ef013daaf __GI___poll (libc.so.6 + 0xf6aaf)
#1 0x00007f9ef141daae g_main_context_poll (libglib-2.0.so.0 + 0x52aae)
#2 0x00007f9ef141de33 g_main_loop_run (libglib-2.0.so.0 + 0x52e33)
#3 0x00007f9ef0cd62db red_worker_main (libspice-server.so.1 + 0x542db)
#4 0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
#5 0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)
Stack trace of thread 75137:
#0 0x00007f9ef013f3bb ioctl (libc.so.6 + 0xf83bb)
#1 0x00005585f61be519 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x362519)
#2 0x00005585f61be5d9 kvm_cpu_exec (qemu-system-x86_64 + 0x3625d9)
#3 0x00005585f61a2aac qemu_kvm_cpu_thread_fn (qemu-system-x86_64 + 0x346aac)
#4 0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
#5 0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
#6 0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)
Stack trace of thread 75135:
#0 0x00007f9ef013daaf __GI___poll (libc.so.6 + 0xf6aaf)
#1 0x00007f9ef141daae g_main_context_poll (libglib-2.0.so.0 + 0x52aae)
#2 0x00007f9ef141de33 g_main_loop_run (libglib-2.0.so.0 + 0x52e33)
#3 0x00005585f62c5fb1 iothread_run (qemu-system-x86_64 + 0x469fb1)
#4 0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
#5 0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
#6 0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)
Stack trace of thread 76112:
#0 0x00007f9ef0223c58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
#1 0x00007f9ef0223d83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
#2 0x00005585f659ad6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
#3 0x00005585f6595864 worker_thread (qemu-system-x86_64 + 0x739864)
#4 0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
#5 0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
#6 0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)
Stack trace of thread 75136:
#0 0x00007f9ef013f3bb ioctl (libc.so.6 + 0xf83bb)
#1 0x00005585f61be519 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x362519)
#2 0x00005585f61be5d9 kvm_cpu_exec (qemu-system-x86_64 + 0x3625d9)
#3 0x00005585f61a2aac qemu_kvm_cpu_thread_fn (qemu-system-x86_64 + 0x346aac)
#4 0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
#5 0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
#6 0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)
Stack trace of thread 76118:
#0 0x00007f9ef0223c58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
#1 0x00007f9ef0223d83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
#2 0x00005585f659ad6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
#3 0x00005585f6595864 worker_thread (qemu-system-x86_64 + 0x739864)
#4 0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
#5 0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
#6 0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)
Stack trace of thread 76199:
#0 0x00007f9ef0223c58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
#1 0x00007f9ef0223d83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
#2 0x00005585f659ad6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
#3 0x00005585f6595864 worker_thread (qemu-system-x86_64 + 0x739864)
#4 0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
#5 0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
#6 0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)
22:38:28 HOST audit[2728]: VIRT_CONTROL pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm op=stop reason=failed vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 vm-pid=-1 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:28 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:560: Domain fc32(7, UUID: 12345678-c061-4f3c-877a-8f429d0ac438) changed state (event: 5, detail: 5)!
22:38:28 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:603: Removing domain 12345678-c061-4f3c-877a-8f429d0ac438
22:38:28 HOST kvmchand[73159]: [INFO] daemon/connections.c:381: Removed vchan between dom 7 (server) and dom 0, (client)
22:38:28 HOST systemd[1]: systemd-coredump@2-76207-0.service: Succeeded.
# Qubes - Restore guest images
22:38:28 HOST qubesd[73181]: Removed file: /var/lib/qubes/vm-templates/fc32/volatile-dirty.img
22:38:28 HOST qubesd[73181]: Reflinked file: /var/lib/qubes/vm-templates/fc32/root.img -> /var/lib/qubes/vm-templates/fc32/root.img.55@2020-09-24T01:32:42Z~xi94c9sd
22:38:28 HOST qubesd[73181]: Renamed file: /var/lib/qubes/vm-templates/fc32/private-dirty.img -> /var/lib/qubes/vm-templates/fc32/private.img
22:38:28 HOST qubesd[73181]: Renamed file: /var/lib/qubes/vm-templates/fc32/root.img.55@2020-09-24T01:32:42Z~xi94c9sd -> /var/lib/qubes/vm-templates/fc32/root.img.55@2020-09-24T01:32:42Z
22:38:28 HOST qubesd[73181]: Removed file: /var/lib/qubes/vm-templates/fc32/root.img.54@2020-09-24T01:32:26Z
22:38:28 HOST qubesd[73181]: Renamed file: /var/lib/qubes/vm-templates/fc32/root-dirty.img -> /var/lib/qubes/vm-templates/fc32/root.img
# --- GUEST RESTARTING ---------------------------------------------------------
22:38:28 HOST qubesd[73181]: Starting fc32
22:38:28 HOST qubesd[73181]: Created sparse file: /var/lib/qubes/vm-templates/fc32/volatile-dirty.img~9cgp4bs8
22:38:28 HOST qubesd[73181]: Renamed file: /var/lib/qubes/vm-templates/fc32/volatile-dirty.img~9cgp4bs8 -> /var/lib/qubes/vm-templates/fc32/volatile-dirty.img
22:38:28 HOST qubesd[73181]: Reflinked file: /var/lib/qubes/vm-templates/fc32/private.img -> /var/lib/qubes/vm-templates/fc32/private-dirty.img~z690m1sb
22:38:28 HOST qubesd[73181]: Renamed file: /var/lib/qubes/vm-templates/fc32/private-dirty.img~z690m1sb -> /var/lib/qubes/vm-templates/fc32/private-dirty.img
22:38:28 HOST qubesd[73181]: Reflinked file: /var/lib/qubes/vm-templates/fc32/root.img -> /var/lib/qubes/vm-templates/fc32/root-dirty.img~s8_46lbr
22:38:28 HOST qubesd[73181]: Renamed file: /var/lib/qubes/vm-templates/fc32/root-dirty.img~s8_46lbr -> /var/lib/qubes/vm-templates/fc32/root-dirty.img
22:38:28 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:560: Domain fc32(-1, UUID: 12345678-c061-4f3c-877a-8f429d0ac438) changed state (event: 0, detail: 1)!
22:38:28 HOST kvmchand[73161]: [WARN] daemon/libvirt.c:634: Unknown lifecycle event 10! Ignoring...
22:38:29 HOST systemd-machined[2431]: New machine qemu-8-fc32.
22:38:29 HOST systemd[1]: Started Virtual Machine qemu-8-fc32.
22:38:29 HOST audit: BPF prog-id=65 op=LOAD
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=deny vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=all exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=major category=pty maj=88 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/null" rdev=01:03 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/full" rdev=01:07 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/zero" rdev=01:05 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/random" rdev=01:08 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/urandom" rdev=01:09 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/ptmx" rdev=05:02 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/kvm" rdev=0A:E8 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/urandom" rdev=01:09 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=disk reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-disk="?" new-disk="/var/lib/qubes/vm-templates/fc32/root-dirty.img" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=disk reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-disk="?" new-disk="/var/lib/qubes/vm-templates/fc32/private-dirty.img" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=disk reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-disk="?" new-disk="/var/lib/qubes/vm-templates/fc32/volatile-dirty.img" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=disk reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-disk="?" new-disk="/var/lib/qubes/vm-kernels/5.6.16-1/modules.img" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=fs reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-fs="?" new-fs="/home/kvm/share" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=fs reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-fs="?" new-fs="/home/kvm/qubes/qubes-packages-mirror-repo/vm-fc32/rpm" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=net reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-net="?" new-net="52:54:00:10:34:4b" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=chardev reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-chardev="?" new-chardev="/dev/pts/29" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=chardev reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-chardev="?" new-chardev="/var/lib/libvirt/qemu/channel/target/domain-8-fc32/org.qemu.guest_agent.0" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=rng reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-rng="?" new-rng="/dev/urandom" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=mem reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-mem=0 new-mem=4096000 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=vcpu reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-vcpu=0 new-vcpu=2 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_CONTROL pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm op=start reason=booted vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 vm-pid=76323 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:560: Domain fc32(8, UUID: 12345678-c061-4f3c-877a-8f429d0ac438) changed state (event: 2, detail: 0)!
22:38:29 HOST qubesd[73181]: Setting Qubes DB info for the VM
22:38:29 HOST qubesd[73181]: Starting Qubes DB
22:38:29 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 0
22:38:29 HOST libvirtd[2728]: Domain id=8 name='fc32' uuid=12345678-c061-4f3c-877a-8f429d0ac438 is tainted: custom-monitor
22:38:29 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 76323
22:38:29 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:560: Domain fc32(8, UUID: 12345678-c061-4f3c-877a-8f429d0ac438) changed state (event: 3, detail: 0)!
22:38:29 HOST kvmchand[73161]: [WARN] daemon/libvirt.c:634: Unknown lifecycle event 10! Ignoring...
22:38:29 HOST runuser[76341]: pam_unix(runuser:session): session opened for user user by (uid=0)
22:38:29 HOST kvmchand[73169]: [INFO] daemon/localhandler.c:620: Client connected! fd: 13
22:38:29 HOST kvmchand[73159]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 0, client_dom: 8, port 111, read_min: 4096, write_min: 4096
22:38:29 HOST audit[73161]: ANOM_ABEND auid=4294967295 uid=0 gid=0 ses=4294967295 pid=73161 comm="kvmchand" exe="/usr/sbin/kvmchand" sig=6 res=1
22:38:29 HOST kvmchand[73161]: malloc_consolidate(): invalid chunk size
# --- KVMCHAND COREDUMP (after VM auto-restart) --------------------------------
22:38:29 HOST systemd[1]: Started Process Core Dump (PID 76346/UID 0).
22:38:29 HOST kvmchand[73159]: [ERROR] daemon/daemon.c:164: ERROR: Child unexpectedly died!
22:38:29 HOST libvirtd[2728]: End of file while reading data: Input/output error
22:38:29 HOST systemd[1]: kvmchand-host.service: Main process exited, code=exited, status=1/FAILURE
22:38:29 HOST systemd[1]: kvmchand-host.service: Failed with result 'exit-code'.
22:38:29 HOST systemd-coredump[76347]: Process 73161 (kvmchand) of user 0 dumped core.
Stack trace of thread 73164:
#0 0x00007fa53a1199e5 __GI_raise (libc.so.6 + 0x3c9e5)
#1 0x00007fa53a102895 __GI_abort (libc.so.6 + 0x25895)
#2 0x00007fa53a15d857 __libc_message (libc.so.6 + 0x80857)
#3 0x00007fa53a164d7c malloc_printerr (libc.so.6 + 0x87d7c)
#4 0x00007fa53a165c48 malloc_consolidate (libc.so.6 + 0x88c48)
#5 0x00007fa53a167a85 _int_malloc (libc.so.6 + 0x8aa85)
#6 0x00007fa53a1695f9 __GI___libc_malloc (libc.so.6 + 0x8c5f9)
#7 0x00007fa539b20a10 g_realloc (libglib-2.0.so.0 + 0x58a10)
#8 0x00007fa53a59f490 virReallocN (libvirt.so.0 + 0xac490)
#9 0x00007fa53a6fd392 virNetMessageEncodeHeader (libvirt.so.0 + 0x20a392)
#10 0x00007fa53a714fc9 virNetClientProgramCall (libvirt.so.0 + 0x221fc9)
#11 0x00007fa53a738e90 callFull (libvirt.so.0 + 0x245e90)
#12 0x00007fa53a73a19e remoteDomainLookupByUUID (libvirt.so.0 + 0x24719e)
#13 0x00007fa53a76856e virDomainLookupByUUID (libvirt.so.0 + 0x27556e)
#14 0x00007fa53a768777 virDomainLookupByUUIDString (libvirt.so.0 + 0x275777)
#15 0x0000000000403dd1 get_pid_by_domain_id (kvmchand + 0x3dd1)
#16 0x0000000000408aa2 response_dispatcher_thread (kvmchand + 0x8aa2)
#17 0x00007fa53a2b2432 start_thread (libpthread.so.0 + 0x9432)
#18 0x00007fa53a1de913 __clone (libc.so.6 + 0x101913)
Stack trace of thread 73161:
#0 0x00007fa53a1d3aaf __GI___poll (libc.so.6 + 0xf6aaf)
#1 0x00007fa539b1aaae g_main_context_poll (libglib-2.0.so.0 + 0x52aae)
#2 0x00007fa539b1abe3 g_main_context_iteration (libglib-2.0.so.0 + 0x52be3)
#3 0x00007fa53a5c72b4 virEventGLibRunOnce (libvirt.so.0 + 0xd42b4)
#4 0x00000000004043e4 run_libvirt_loop (kvmchand + 0x43e4)
#5 0x0000000000402a6c host_main (kvmchand + 0x2a6c)
#6 0x00007fa53a104042 __libc_start_main (libc.so.6 + 0x27042)
#7 0x0000000000402c9e _start (kvmchand + 0x2c9e)
Stack trace of thread 73163:
#0 0x00007fa53a2bd6dd __libc_recvmsg (libpthread.so.0 + 0x146dd)
#1 0x0000000000408b3e ipcmsg_recv (kvmchand + 0x8b3e)
#2 0x0000000000408ccf client_receiver_thread (kvmchand + 0x8ccf)
#3 0x00007fa53a2b2432 start_thread (libpthread.so.0 + 0x9432)
#4 0x00007fa53a1de913 __clone (libc.so.6 + 0x101913)
Stack trace of thread 73162:
#0 0x00007fa53a2b8e92 futex_wait_cancelable (libpthread.so.0 + 0xfe92)
#1 0x00000000004095c6 dispatcher_thread (kvmchand + 0x95c6)
#2 0x00007fa53a2b2432 start_thread (libpthread.so.0 + 0x9432)
#3 0x00007fa53a1de913 __clone (libc.so.6 + 0x101913)
22:38:29 HOST systemd[1]: systemd-coredump@3-76346-0.service: Succeeded.
22:38:29 HOST audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@3-76346-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
22:38:30 HOST abrt-notification[76372]: Process 75117 (qemu-system-x86_64) crashed in qemu_ram_set_idstr.cold()
# systemd auto restart
22:38:30 HOST systemd[1]: kvmchand-host.service: Scheduled restart job, restart counter is at 1.
22:38:30 HOST audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=kvmchand-host comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
22:38:30 HOST audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=kvmchand-host comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
22:38:30 HOST systemd[1]: Stopped KVM vchan daemon.
22:38:30 HOST systemd[1]: Started KVM vchan daemon.
22:38:30 HOST audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=kvmchand-host comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
22:38:30 HOST kvmchand[76383]: [INFO] daemon/localhandler.c:580: guest_main: sd_notify
Backtrace for kvmchand after the guest VM was auto-started by Qubes. Not sure if it's relevant given the QEMU coredump.
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007fa53a102895 in __GI_abort () at abort.c:79
#2 0x00007fa53a15d857 in __libc_message (action=action@entry=do_abort,
fmt=fmt@entry=0x7fa53a26ee1d "%s\n") at ../sysdeps/posix/libc_fatal.c:155
#3 0x00007fa53a164d7c in malloc_printerr (
str=str@entry=0x7fa53a2710a0 "malloc_consolidate(): invalid chunk size") at malloc.c:5347
#4 0x00007fa53a165c48 in malloc_consolidate (av=av@entry=0x7fa528000020) at malloc.c:4477
#5 0x00007fa53a167a85 in _int_malloc (av=av@entry=0x7fa528000020, bytes=bytes@entry=65540)
at malloc.c:3699
#6 0x00007fa53a1695f9 in __GI___libc_malloc (bytes=65540) at malloc.c:3066
#7 0x00007fa539b20a10 in g_realloc (mem=0x0, n_bytes=n_bytes@entry=65540) at ../glib/gmem.c:167
#8 0x00007fa539b20bb0 in g_realloc_n (mem=<optimized out>, n_blocks=n_blocks@entry=1,
n_block_bytes=n_block_bytes@entry=65540) at ../glib/gmem.c:390
#9 0x00007fa53a59f490 in virReallocN (ptrptr=ptrptr@entry=0x7fa5280027e8, size=size@entry=1,
count=count@entry=65540) at ../../src/util/viralloc.c:91
#10 0x00007fa53a6fd392 in virNetMessageEncodeHeader (msg=msg@entry=0x7fa5280027e0)
at ../../src/rpc/virnetmessage.c:233
#11 0x00007fa53a714fc9 in virNetClientProgramCall (prog=prog@entry=0x238c2b0,
client=client@entry=0x238b820, serial=serial@entry=59, proc=proc@entry=24,
noutfds=noutfds@entry=0, outfds=outfds@entry=0x0, ninfds=0x0, infds=0x0,
args_filter=0x7fa53a7588c0 <xdr_remote_domain_lookup_by_uuid_args>, args=0x7fa5378a8810,
ret_filter=0x7fa53a7588e0 <xdr_remote_domain_lookup_by_uuid_ret>, ret=0x7fa5378a8820)
at ../../src/rpc/virnetclientprogram.c:314
#12 0x00007fa53a738e90 in callFull (conn=<optimized out>, ret=0x7fa5378a8820 "",
ret_filter=0x7fa53a7588e0 <xdr_remote_domain_lookup_by_uuid_ret>,
args=0x7fa5378a8810 "\022\064Vx\300aO<\207z\217B\235\n\304\070",
args_filter=0x7fa53a7588c0 <xdr_remote_domain_lookup_by_uuid_args>, proc_nr=24, fdoutlen=0x0,
fdout=0x0, fdinlen=0, fdin=0x0, flags=flags@entry=0, priv=0x238fc00,
priv@entry=0x7fa53a7588e0 <xdr_remote_domain_lookup_by_uuid_ret>)
at ../../src/remote/remote_driver.c:6754
#13 call (priv=priv@entry=0x238fc00, flags=flags@entry=0, proc_nr=proc_nr@entry=24,
args_filter=0x7fa53a7588c0 <xdr_remote_domain_lookup_by_uuid_args>,
args=args@entry=0x7fa5378a8810 "\022\064Vx\300aO<\207z\217B\235\n\304\070",
ret_filter=ret_filter@entry=0x7fa53a7588e0 <xdr_remote_domain_lookup_by_uuid_ret>,
ret=0x7fa5378a8820 "", conn=0x238ec00) at ../../src/remote/remote_driver.c:6776
#14 0x00007fa53a73a19e in remoteDomainLookupByUUID (conn=0x238ec00,
uuid=0x7fa5378a8900 "\022\064Vx\300aO<\207z\217B\235\n\304\070")
at ./remote/remote_client_bodies.h:3610
#15 0x00007fa53a76856e in virDomainLookupByUUID (conn=0x238ec00,
uuid=0x7fa5378a8900 "\022\064Vx\300aO<\207z\217B\235\n\304\070") at ../../src/libvirt-domain.c:346
#16 0x00007fa53a768777 in virDomainLookupByUUIDString (conn=0x238ec00, uuidstr=0x2390b50 "12345678-c061-4f3c-877a-8f429d0ac438") at ../../src/libvirt-domain.c:389
#17 0x0000000000403dd1 in get_pid_by_domain_id (pid_out=<synthetic pointer>, id=<optimized out>) at daemon/libvirt.c:121
#18 handle_ipc_message (msg=0x7fa530000da0) at daemon/libvirt.c:681
#19 0x0000000000408aa2 in response_dispatcher_thread (rdata_=0x414630 <g_ipc_data+144>) at daemon/ipc.c:392
#20 0x00007fa53a2b2432 in start_thread (arg=<optimized out>) at pthread_create.c:477
#21 0x00007fa53a1de913 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Sorry for the delay. Have you managed to figure anything out or work around this bug? If not, I have an idea I could try to implement that would likely work around it.
I've pushed https://github.com/shawnanastasio/libkvmchan/commit/09cd616f216e7844c0159458c88c5a49b7c8a46c which modifies the daemon's vchan allocation behavior to be much less aggressive about disconnecting vchans (it now tries to reuse them instead). Eventually this may end up wasting memory and will have to be reined in, but it should result in significantly fewer hotplugs, which may prevent this issue with QEMU from being hit.
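The reuse strategy could be sketched roughly as follows. This is a hypothetical, simplified illustration of the idea (a pool of parked vchans matched by domain pair and size), not kvmchand's actual data structures or API; all names here (`vchan_slot`, `vchan_acquire`, `vchan_release`, `POOL_MAX`) are invented for the example.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch: instead of hot-removing the ivshmem device when a
 * client exits, park the vchan and hand it back out on the next matching
 * request, avoiding a QEMU device_del/device_add cycle. */

#define POOL_MAX 16

struct vchan_slot {
    bool in_use;     /* currently attached to a client */
    bool allocated;  /* backing ivshmem region exists */
    int server_dom, client_dom;
    size_t size;
};

static struct vchan_slot pool[POOL_MAX];

/* Look for a parked vchan with matching parameters before hotplugging. */
struct vchan_slot *vchan_acquire(int server_dom, int client_dom, size_t size) {
    for (int i = 0; i < POOL_MAX; i++) {
        struct vchan_slot *s = &pool[i];
        if (s->allocated && !s->in_use &&
            s->server_dom == server_dom && s->client_dom == client_dom &&
            s->size >= size) {
            s->in_use = true;  /* reuse: no ivshmem hotplug needed */
            return s;
        }
    }
    /* No reusable slot: this is where a fresh ivshmem hotplug would occur. */
    for (int i = 0; i < POOL_MAX; i++) {
        struct vchan_slot *s = &pool[i];
        if (!s->allocated) {
            s->allocated = true;
            s->in_use = true;
            s->server_dom = server_dom;
            s->client_dom = client_dom;
            s->size = size;
            return s;
        }
    }
    return NULL;  /* pool exhausted; a real daemon would reclaim memory here */
}

/* On client exit, park the vchan instead of hot-removing the device. */
void vchan_release(struct vchan_slot *s) {
    s->in_use = false;
}
```

The trade-off the commit message notes falls out directly: parked slots keep their shared-memory regions allocated, so memory use grows with the high-water mark of concurrent vchans until some reclamation policy is added.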
Let me know if you get the chance to test it.
Thanks. I like the idea of reusing the vchans since it takes so long to set them up. Looking at the log, it seems to take over 10 seconds, and that's on a fast CPU with lots of memory.
I have had to focus my efforts on catching up on a few things, but I will have a chance to resume development by the weekend or shortly afterwards. I have also set aside most of the month of December to hopefully finish the remaining tasks needed to get Qubes completely functional within KVM. I currently don't have the Qubes/KVM environment loaded on my development machine but will test the patch soon.
Glad to hear - let me know if you run into any issues or have any breakthroughs :smile:
After implementing the `qubes-gui-agent` configuration, scripts, and executables within the VM, the virtual machine crashes during certain tasks (repeatably). I am not sure if it's an issue with libkvmchand, QEMU, or perhaps some misconfiguration on the host, but I was hoping you might have come across this before or could provide some insight.
The virtual machine crashes after re-installing some packages (and in some other scenarios), which always happens when executing some, but not all, `qrexec` processes. The process receives signal 6 (SIGABRT), which I understand indicates an abort (e.g. a failed assertion or detected heap corruption) and forces `qemu-system-x86_64` to terminate immediately.
The crash occurs once the packages are re-installed, when the DNF helper calls `qubes.NotifyUpdates` followed by `SyncAppMenus`.
Backtrace
Logs