shawnanastasio / libkvmchan

An implementation of the Xen vchan API on KVM

Add systemd sd_notify() #19

Closed nrgaway closed 4 years ago

nrgaway commented 4 years ago

A systemd unit file with 'Type' set to 'notify' lets systemd notify any unit that 'Requires' it once the service calls sd_notify(). A unit that requires the notification will not be started until systemd receives a 'READY=1' message, which allows services that depend on another service being fully initialized to wait, preventing race conditions.

The kvmchand service unit file's 'Type' was changed from 'simple' to 'notify', and drop-ins were added for qubes-db.service and qubes-qrexec.service that 'Require' the kvmchand.service unit, forcing them to wait until kvmchan initialization is complete. Previously those services were starting too early, which caused fatal communication errors they could not recover from.
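
For reference, the notification itself is a single call into libsystemd. A minimal sketch of the idea (assuming libsystemd is available at build time; this is illustrative, not the exact call site in the PR):

/* Minimal sketch only -- assumes libsystemd is available.
 * Build with: gcc notify.c $(pkg-config --cflags --libs libsystemd) */
#include <systemd/sd-daemon.h>

int main(void) {
    /* With Type=notify, systemd only considers the service started
     * (and only then starts units that Require it) once it receives
     * READY=1 over $NOTIFY_SOCKET. */
    sd_notify(0, "READY=1");
    return 0;
}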

shawnanastasio commented 4 years ago

Could you explain what this is used for? From what I can see it's used to notify systemd that the service has started up successfully, but shouldn't it assume that anyway if the daemon doesn't crash? Is it used to provide an event that other systemd services can listen for/act upon?

shawnanastasio commented 4 years ago

Never mind, I just saw your explanation in #12. Is there a particular reason this code is in vfio.c? If the issue is clients failing to communicate with kvmchand, then it should probably live in localhandler.c after the socket has been initialized, somewhere around here.
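
Roughly what I have in mind (a sketch only; the names below are placeholders, not the actual localhandler.c symbols):

/* Sketch only -- placeholder names, not the real localhandler.c code. */
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <systemd/sd-daemon.h>

static int setup_client_socket(const char *path) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 5) < 0)
        return -1;

    /* Signal readiness only once the socket that clients use is listening. */
    sd_notify(0, "READY=1");
    return fd;
}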

nrgaway commented 4 years ago

It is used in the VM to signal that kvmchand is initialized before qubes-db starts. Systemd will not start the qubes-db or qrexec service until the ready notification has been sent by the libkvmchan daemon, since I changed the systemd unit service type from 'simple' to 'notify' and added 'Requires=kvmchand.service' to both the qubes-db and qrexec unit files.

The first thing I tried was putting the notify in localhandler.c; I tried a few spots without success. I just tried moving it to the location you suggested and hit the same or similar issues as before. For some reason it seems to work well in the vfio code: both qubes-db and qrexec start, whereas qrexec does not start when the call is placed in localhandler.c. That could be related to the other issue where I needed qrexec to keep retrying on the host side. I don't fully understand the protocol ordering yet: the qubes-db host socket needs to be in place before the VM can connect, while qrexec is the reverse, with the VM socket needing to exist before the host can connect.

I have included host and VM logs that include a stack trace (with sd_notify moved to line 574 of localhandler.c). Maybe it will shed some light on the issue.

HOST LOG
18:08:47 HOST kvmchand[46483]: [INFO] daemon/localhandler.c:580: guest_main: sd_notify
18:09:08 HOST kvmchand[46477]: [INFO] daemon/libvirt.c:560: Domain fedora-32(-1, UUID: 095c78db-c061-4f3c-877a-8f429d0ac438) changed state (event: 0, detail: 1)!
18:09:08 HOST kvmchand[46477]: [WARN] daemon/libvirt.c:634: Unknown lifecycle event 10! Ignoring...
18:09:08 HOST kvmchand[46477]: [INFO] daemon/libvirt.c:560: Domain fedora-32(16, UUID: 095c78db-c061-4f3c-877a-8f429d0ac438) changed state (event: 2, detail: 0)!
18:09:08 HOST kvmchand[46477]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 0
18:09:08 HOST kvmchand[46479]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 46518
18:09:08 HOST kvmchand[46477]: [INFO] daemon/libvirt.c:560: Domain fedora-32(16, UUID: 095c78db-c061-4f3c-877a-8f429d0ac438) changed state (event: 3, detail: 0)!
18:09:08 HOST kvmchand[46477]: [WARN] daemon/libvirt.c:634: Unknown lifecycle event 10! Ignoring...
18:09:08 HOST kvmchand[46483]: [INFO] daemon/localhandler.c:620: Client connected! fd: 10
18:09:08 HOST kvmchand[46474]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 0, client_dom: 16, port 111, read_min: 4096, write_min: 4096
18:09:08 HOST kvmchand[46477]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 2
18:09:08 HOST kvmchand[46479]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 46518
18:09:08 HOST kvmchand[46477]: [INFO] daemon/libvirt.c:560: Domain fedora-32(16, UUID: 095c78db-c061-4f3c-877a-8f429d0ac438) changed state (event: 4, detail: 0)!
18:09:08 HOST kvmchand[46477]: [WARN] daemon/libvirt.c:634: Unknown lifecycle event 10! Ignoring...
18:09:08 HOST kvmchand[46483]: [INFO] daemon/localhandler.c:620: Client connected! fd: 11
18:09:08 HOST kvmchand[46474]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 16 port 512
18:09:09 HOST kvmchand[46474]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 16 port 512
... 56 more identical entries (1 per second for 60 seconds)
18:10:07 HOST kvmchand[46474]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 16 port 512
18:10:08 HOST kvmchand[46474]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 16 port 512
18:10:08 HOST kvmchand[46483]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 11
VM LOG
18:09:13 VM   systemd[1]: Starting KVM vchan daemon...
18:09:13 VM   kvmchand[520]: [INFO] daemon/localhandler.c:580: guest_main: sd_notify
18:09:13 VM   kvmchand[519]: [INFO] daemon/vfio.c:1189: Got ivshmem device: 0000:00:10.0
18:09:13 VM   kvmchand[519]: [INFO] daemon/vfio.c:1189: Got ivshmem device: 0000:00:11.0
18:09:13 VM   kvmchand[519]: [WARN] daemon/vfio.c:1205: Some ivshmem devices aren't bound to vfio-pci. Attempting to bind...
18:09:13 VM   systemd[1]: Started KVM vchan daemon.
18:09:13 VM   kernel: vfio-pci 0000:00:10.0: Adding to iommu group 0
18:09:13 VM   kernel: vfio-pci 0000:00:10.0: Adding kernel taint for vfio-noiommu group on device
18:09:13 VM   kernel: vfio-pci 0000:00:11.0: Adding to iommu group 1
18:09:13 VM   kernel: vfio-pci 0000:00:11.0: Adding kernel taint for vfio-noiommu group on device
18:09:13 VM   kvmchand[519]: [WARN] daemon/vfio.c:1220: Successfully bound ivshmem devices.
18:09:13 VM   systemd[1]: Starting Qubes DB agent...
18:09:13 VM   kernel: vfio-pci 0000:00:10.0: vfio-noiommu device opened by user (kvmchand:519)
18:09:13 VM   kernel: vfio-pci 0000:00:11.0: vfio-noiommu device opened by user (kvmchand:519)
18:09:13 VM   kvmchand[520]: [INFO] daemon/localhandler.c:620: Client connected! fd: 8
18:09:13 VM   kernel: kvmchand[528]: segfault at 30 ip 0000000000406aa2 sp 00007fecf1ab59f0 error 4 in kvmchand[402000+c000]
18:09:13 VM   kernel: Code: e9 be fe ff ff 48 8d 0d 62 7f 00 00 ba 22 02 00 00 e9 f7 f7 ff ff 66 90 41 54 49 89 f0 b9 28 00 00 00 49 89 d4 55 4c 89 c2 53 <48> 8b 6f 08 48 89 fb 48 83 c7 58 48 8d 75 20 e8 da 6b 00 00 48 8d
18:09:13 VM   systemd[1]: Created slice system-systemd\x2dcoredump.slice.
18:10:43 VM   systemd[1]: qubes-db.service: start operation timed out. Terminating.
18:12:13 VM   systemd[1]: qubes-db.service: State 'stop-sigterm' timed out. Killing.
18:12:13 VM   systemd[1]: qubes-db.service: Killing process 530 (qubesdb-daemon) with signal SIGKILL.
18:12:13 VM   systemd[1]: qubes-db.service: Main process exited, code=killed, status=9/KILL
18:12:13 VM   systemd[1]: qubes-db.service: Failed with result 'timeout'.
18:12:13 VM   systemd[1]: Failed to start Qubes DB agent.
18:12:13 VM   systemd[1]: Starting Home Area Manager...
18:12:13 VM   qrexec-agent[952]: 2020-08-21 14:12:13.820 qrexec-agent[952]: qrexec-agent.c:376:init: qrexec-agent.init: ctrl_vchan call
18:12:13 VM   kvmchand[502]: [ERROR] daemon/daemon.c:164: ERROR: Child unexpectedly died!
18:12:13 VM   qrexec-agent[952]: 2020-08-21 14:12:13.889 qrexec-agent[952]: qrexec-agent.c:378:init: qrexec-agent.init: ctrl_vchan returned
18:12:13 VM   qrexec-agent[952]: 2020-08-21 14:12:13.889 qrexec-agent[952]: qrexec-agent.c:332:handle_vchan_error: Error while vchan server_init, exiting
18:12:13 VM   systemd[1]: qubes-qrexec-agent.service: Main process exited, code=exited, status=1/FAILURE
18:12:13 VM   systemd[1]: qubes-qrexec-agent.service: Failed with result 'exit-code'.
18:12:13 VM   systemd[1]: Failed to start Qubes remote exec agent.
18:12:13 VM   systemd[1]: kvmchand.service: Main process exited, code=exited, status=1/FAILURE
18:12:13 VM   systemd[1]: kvmchand.service: Failed with result 'exit-code'.
18:12:13 VM   systemd-coredump[939]: Process 519 (kvmchand) of user 0 dumped core.
              Stack trace of thread 528:
              #0  0x0000000000406aa2 kvmchand_send_message (kvmchand + 0x6aa2)
              #1  0x0000000000407a65 handle_ipc_message (kvmchand + 0x7a65)
              #2  0x0000000000408ac2 response_dispatcher_thread (kvmchand + 0x8ac2)
              #3  0x00007fecf4457432 start_thread (libpthread.so.0 + 0x9432)
              #4  0x00007fecf4383913 __clone (libc.so.6 + 0x101913)

              Stack trace of thread 527:
              #0  0x00007fecf44626dd recvmsg (libpthread.so.0 + 0x146dd)
              #1  0x0000000000408b5e ipcmsg_recv.constprop.0 (kvmchand + 0x8b5e)
              #2  0x0000000000408cef client_receiver_thread (kvmchand + 0x8cef)
              #3  0x00007fecf4457432 start_thread (libpthread.so.0 + 0x9432)
              #4  0x00007fecf4383913 __clone (libc.so.6 + 0x101913)

              Stack trace of thread 526:
              #0  0x00007fecf445de92 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xfe92)
              #1  0x00000000004095e6 dispatcher_thread (kvmchand + 0x95e6)
              #2  0x00007fecf4457432 start_thread (libpthread.so.0 + 0x9432)
              #3  0x00007fecf4383913 __clone (libc.so.6 + 0x101913)

              Stack trace of thread 519:
              #0  0x00007fecf437e526 __mmap (libc.so.6 + 0xfc526)
              #1  0x00000000004074ae vfio_conn_find_by_ivposition (kvmchand + 0x74ae)
              #2  0x00000000004080dc run_vfio_loop (kvmchand + 0x80dc)
              #3  0x0000000000402902 main (kvmchand + 0x2902)
              #4  0x00007fecf42a9042 __libc_start_main (libc.so.6 + 0x27042)
              #5  0x0000000000402cce _start (kvmchand + 0x2cce)

              Stack trace of thread 531:
              #0  0x00007fecf4383c5e epoll_wait (libc.so.6 + 0x101c5e)
              #1  0x0000000000406ece outgoing_thread (kvmchand + 0x6ece)
              #2  0x00007fecf4457432 start_thread (libpthread.so.0 + 0x9432)
              #3  0x00007fecf4383913 __clone (libc.so.6 + 0x101913)
18:12:13 VM   systemd[1]: Finished Qubes misc post-boot actions.
18:12:13 VM   kernel: fbcon: Taking over console
18:12:13 VM   systemd[1]: systemd-coredump@0-533-0.service: Succeeded.
shawnanastasio commented 4 years ago

18:09:13 VM kernel: kvmchand[528]: segfault at 30 ip 0000000000406aa2 sp 00007fecf1ab59f0 error 4 in kvmchand[402000+c000]

Ouch, it looks like kvmchand is segfaulting in the guest under those circumstances, which is probably why everything is failing.

If you still have the binary that was crashing, could you run addr2line -e kvmchand 406aa2 on it to see which line the issue was at? (Substitute the address with whatever shows up in your kernel log's segfault message.)

nrgaway commented 4 years ago

Sure.

addr2line

# addr2line -e kvmchand 0x406aa2
/usr/src/debug/libkvmchan-4.1.0-1.fc32.x86_64/libkvmchan/daemon/vfio.c:719

Backtrace

# coredumpctl gdb /usr/sbin/kvmchand 
           PID: 519 (kvmchand)
           UID: 0 (root)
           GID: 0 (root)
        Signal: 11 (SEGV)
     Timestamp: Fri 2020-08-21 14:09:13 EDT (16h ago)
  Command Line: /usr/sbin/kvmchand -g
    Executable: /usr/sbin/kvmchand
 Control Group: /system.slice/kvmchand.service
          Unit: kvmchand.service
         Slice: system.slice
       Boot ID: 29fc6d2c2e3241a5a0e6479ea62fd715
    Machine ID: f8adb3da5c4244f7bb92f3107fe8bf5d
      Hostname: localhost
       Storage: /var/lib/systemd/coredump/core.kvmchand.0.29fc6d2c2e3241a5a0e6479ea62fd715.519.1598033353000000000000.lz4
       Message: Process 519 (kvmchand) of user 0 dumped core.

                Stack trace of thread 528:
                #0  0x0000000000406aa2 kvmchand_send_message (kvmchand + 0x6aa2)
                #1  0x0000000000407a65 handle_ipc_message (kvmchand + 0x7a65)
                #2  0x0000000000408ac2 response_dispatcher_thread (kvmchand + 0x8ac2)
                #3  0x00007fecf4457432 start_thread (libpthread.so.0 + 0x9432)
                #4  0x00007fecf4383913 __clone (libc.so.6 + 0x101913)

                Stack trace of thread 527:
                #0  0x00007fecf44626dd recvmsg (libpthread.so.0 + 0x146dd)
                #1  0x0000000000408b5e ipcmsg_recv.constprop.0 (kvmchand + 0x8b5e)
                #2  0x0000000000408cef client_receiver_thread (kvmchand + 0x8cef)
                #3  0x00007fecf4457432 start_thread (libpthread.so.0 + 0x9432)
                #4  0x00007fecf4383913 __clone (libc.so.6 + 0x101913)

                Stack trace of thread 526:
                #0  0x00007fecf445de92 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xfe92)
                #1  0x00000000004095e6 dispatcher_thread (kvmchand + 0x95e6)
                #2  0x00007fecf4457432 start_thread (libpthread.so.0 + 0x9432)
                #3  0x00007fecf4383913 __clone (libc.so.6 + 0x101913)

                Stack trace of thread 519:
                #0  0x00007fecf437e526 __mmap (libc.so.6 + 0xfc526)
                #1  0x00000000004074ae vfio_conn_find_by_ivposition (kvmchand + 0x74ae)
                #2  0x00000000004080dc run_vfio_loop (kvmchand + 0x80dc)
                #3  0x0000000000402902 main (kvmchand + 0x2902)
                #4  0x00007fecf42a9042 __libc_start_main (libc.so.6 + 0x27042)
                #5  0x0000000000402cce _start (kvmchand + 0x2cce)

                Stack trace of thread 531:
                #0  0x00007fecf4383c5e epoll_wait (libc.so.6 + 0x101c5e)
                #1  0x0000000000406ece outgoing_thread (kvmchand + 0x6ece)
                #2  0x00007fecf4457432 start_thread (libpthread.so.0 + 0x9432)
                #3  0x00007fecf4383913 __clone (libc.so.6 + 0x101913)

GNU gdb (GDB) Fedora 9.1-5.fc32
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/sbin/kvmchand...
Reading symbols from /usr/lib/debug/usr/sbin/kvmchand-4.1.0-1.fc32.x86_64.debug...
[New LWP 528]
[New LWP 527]
[New LWP 526]
[New LWP 519]
[New LWP 531]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Missing separate debuginfos, use: dnf debuginfo-install audit-libs-3.0-0.19.20191104git1c2f876.fc32.x86_64 cyrus-sasl-lib-2.1.27-4.fc32.x86_64 dbus-libs-1.12.20-1.fc32.x86_64 device-mapper-libs-1.02.171-1.fc32.x86_64 glib2-2.64.5-1.fc32.x86_64 glibc-2.31-4.fc32.x86_64 gmp-6.1.2-13.fc32.x86_64 gnutls-3.6.14-2.fc32.x86_64 keyutils-libs-1.6-4.fc32.x86_64 krb5-libs-1.18.2-20.fc32.x86_64 libacl-2.2.53-5.fc32.x86_64 libattr-2.4.48-8.fc32.x86_64 libcap-ng-0.7.10-2.fc32.x86_64 libcom_err-1.45.5-3.fc32.x86_64 libcurl-minimal-7.69.1-5.fc32.x86_64 libffi-3.1-24.fc32.x86_64 libgcc-10.2.1-1.fc32.x86_64 libgcrypt-1.8.5-3.fc32.x86_64 libgpg-error-1.36-3.fc32.x86_64 libidn2-2.3.0-2.fc32.x86_64 libmount-2.35.2-1.fc32.x86_64 libnghttp2-1.41.0-1.fc32.x86_64 libnl3-3.5.0-2.fc32.x86_64 libselinux-3.0-5.fc32.x86_64 libssh-0.9.4-3.fc32.x86_64 libssh2-1.9.0-5.fc32.x86_64 libtasn1-4.16.0-1.fc32.x86_64 libtirpc-1.2.6-1.rc4.fc32.x86_64 libunistring-0.9.10-7.fc32.x86_64 libvirt-libs-6.1.0-4.fc32.x86_64 libwsman1-2.6.8-12.fc32.x86_64 libxcrypt-4.4.16-3.fc32.x86_64 libxml2-2.9.10-3.fc32.x86_64 lz4-libs-1.9.1-2.fc32.x86_64 nettle-3.5.1-5.fc32.x86_64 numactl-libs-2.0.12-4.fc32.x86_64 openssl-libs-1.1.1g-1.fc32.x86_64 p11-kit-0.23.20-1.fc32.x86_64 pcre-8.44-1.fc32.x86_64 pcre2-10.35-4.fc32.x86_64 systemd-libs-245.7-1.fc32.x86_64 xz-libs-5.2.5-1.fc32.x86_64 yajl-2.1.0-14.fc32.x86_64 zlib-1.2.11-21.fc32.x86_64
--Type <RET> for more, q to quit, c to continue without paging--c
Core was generated by `/usr/sbin/kvmchand -g'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x0000000000406aa2 in kvmchand_send_message (conn=0x28, msg=msg@entry=0x7fecf1ab5a20, response=response@entry=0x7fecf1ab5a10) at daemon/vfio.c:719
719     shmem_hdr_t *hdr = conn->shared;
[Current thread is 1 (Thread 0x7fecf1ab6700 (LWP 528))]
(gdb) bt
#0  0x0000000000406aa2 in kvmchand_send_message (conn=0x28, 
    msg=msg@entry=0x7fecf1ab5a20, response=response@entry=0x7fecf1ab5a10)
    at daemon/vfio.c:719
#1  0x0000000000407a65 in handle_ipc_message (msg=0x7fecec000b60)
    at daemon/vfio.c:1100
#2  0x0000000000408ac2 in response_dispatcher_thread (
    rdata_=0x4145b0 <g_ipc_data+144>) at daemon/ipc.c:392
#3  0x00007fecf4457432 in start_thread () from /lib64/libpthread.so.0
#4  0x00007fecf4383913 in clone () from /lib64/libc.so.6
nrgaway commented 4 years ago

Oops! I noticed some debug symbols were missing.

           PID: 519 (kvmchand)
           UID: 0 (root)
           GID: 0 (root)
        Signal: 11 (SEGV)
     Timestamp: Fri 2020-08-21 14:09:13 EDT (17h ago)
  Command Line: /usr/sbin/kvmchand -g
    Executable: /usr/sbin/kvmchand
 Control Group: /system.slice/kvmchand.service
          Unit: kvmchand.service
         Slice: system.slice
       Boot ID: 29fc6d2c2e3241a5a0e6479ea62fd715
    Machine ID: f8adb3da5c4244f7bb92f3107fe8bf5d
      Hostname: localhost
       Storage: /var/lib/systemd/coredump/core.kvmchand.0.29fc6d2c2e3241a5a0e6479ea62fd715.519.1598033353000000000000.lz4
       Message: Process 519 (kvmchand) of user 0 dumped core.

                Stack trace of thread 528:
                #0  0x0000000000406aa2 kvmchand_send_message (kvmchand + 0x6aa2)
                #1  0x0000000000407a65 handle_ipc_message (kvmchand + 0x7a65)
                #2  0x0000000000408ac2 response_dispatcher_thread (kvmchand + 0x8ac2)
                #3  0x00007fecf4457432 start_thread (libpthread.so.0 + 0x9432)
                #4  0x00007fecf4383913 __clone (libc.so.6 + 0x101913)

                Stack trace of thread 527:
                #0  0x00007fecf44626dd recvmsg (libpthread.so.0 + 0x146dd)
                #1  0x0000000000408b5e ipcmsg_recv.constprop.0 (kvmchand + 0x8b5e)
                #2  0x0000000000408cef client_receiver_thread (kvmchand + 0x8cef)
                #3  0x00007fecf4457432 start_thread (libpthread.so.0 + 0x9432)
                #4  0x00007fecf4383913 __clone (libc.so.6 + 0x101913)

                Stack trace of thread 526:
                #0  0x00007fecf445de92 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xfe92)
                #1  0x00000000004095e6 dispatcher_thread (kvmchand + 0x95e6)
                #2  0x00007fecf4457432 start_thread (libpthread.so.0 + 0x9432)
                #3  0x00007fecf4383913 __clone (libc.so.6 + 0x101913)

                Stack trace of thread 519:
                #0  0x00007fecf437e526 __mmap (libc.so.6 + 0xfc526)
                #1  0x00000000004074ae vfio_conn_find_by_ivposition (kvmchand + 0x74ae)
                #2  0x00000000004080dc run_vfio_loop (kvmchand + 0x80dc)
                #3  0x0000000000402902 main (kvmchand + 0x2902)
                #4  0x00007fecf42a9042 __libc_start_main (libc.so.6 + 0x27042)
                #5  0x0000000000402cce _start (kvmchand + 0x2cce)

                Stack trace of thread 531:
                #0  0x00007fecf4383c5e epoll_wait (libc.so.6 + 0x101c5e)
                #1  0x0000000000406ece outgoing_thread (kvmchand + 0x6ece)
                #2  0x00007fecf4457432 start_thread (libpthread.so.0 + 0x9432)
                #3  0x00007fecf4383913 __clone (libc.so.6 + 0x101913)

GNU gdb (GDB) Fedora 9.1-5.fc32
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/sbin/kvmchand...
Reading symbols from /usr/lib/debug/usr/sbin/kvmchand-4.1.0-1.fc32.x86_64.debug...
[New LWP 528]
[New LWP 527]
[New LWP 526]
[New LWP 519]
[New LWP 531]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
--Type <RET> for more, q to quit, c to continue without paging--c
Core was generated by `/usr/sbin/kvmchand -g'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x0000000000406aa2 in kvmchand_send_message (conn=0x28, msg=msg@entry=0x7fecf1ab5a20, response=response@entry=0x7fecf1ab5a10) at daemon/vfio.c:719
719     shmem_hdr_t *hdr = conn->shared;
[Current thread is 1 (Thread 0x7fecf1ab6700 (LWP 528))]
(gdb) bt
#0  0x0000000000406aa2 in kvmchand_send_message (conn=0x28, 
    msg=msg@entry=0x7fecf1ab5a20, response=response@entry=0x7fecf1ab5a10)
    at daemon/vfio.c:719
#1  0x0000000000407a65 in handle_ipc_message (msg=0x7fecec000b60)
    at daemon/vfio.c:1100
#2  0x0000000000408ac2 in response_dispatcher_thread (
    rdata_=0x4145b0 <g_ipc_data+144>) at daemon/ipc.c:392
#3  0x00007fecf4457432 in start_thread (arg=<optimized out>)
    at pthread_create.c:477
#4  0x00007fecf4383913 in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
shawnanastasio commented 4 years ago

Interesting, it looks like you're hitting a race between vfio initialization and IPC command execution. I'll whip up a patch for you to test shortly.
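
For context, the crash is the IPC dispatcher dereferencing a VFIO connection that hasn't been set up yet (hence conn=0x28 in your backtrace). The general shape of the fix is to make the IPC side wait until VFIO initialization has published its state; a rough sketch of that idea (illustrative only, not the actual patch):

/* Illustrative sketch only -- not the actual patch. */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t vfio_ready_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  vfio_ready_cond = PTHREAD_COND_INITIALIZER;
static bool vfio_ready;

/* Called by the VFIO thread once device setup has completed. */
static void mark_vfio_ready(void) {
    pthread_mutex_lock(&vfio_ready_lock);
    vfio_ready = true;
    pthread_cond_broadcast(&vfio_ready_cond);
    pthread_mutex_unlock(&vfio_ready_lock);
}

/* Called by the IPC dispatcher before touching any VFIO connection state. */
static void wait_for_vfio_ready(void) {
    pthread_mutex_lock(&vfio_ready_lock);
    while (!vfio_ready)
        pthread_cond_wait(&vfio_ready_cond, &vfio_ready_lock);
    pthread_mutex_unlock(&vfio_ready_lock);
}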

shawnanastasio commented 4 years ago

When you get the chance, could you try out https://github.com/shawnanastasio/libkvmchan/commit/7517cd8d6a555c0c03458b4bd00d12dc896991f3 and see if it fixes the crashes?

nrgaway commented 4 years ago

When you get the chance, could you try out 7517cd8 and see if it fixes the crashes?

Thanks, it seems to be working! It may have solved some other issues as well, since I now see 4-5 'RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)' PCI devices, whereas only 2 appeared before the patch.

I also updated the PR to reflect the move to localhandler.c.

I noticed an unrelated segmentation fault on the host side. It's too late to debug it tonight, but I will look into it more tomorrow and create an issue with a stack trace.

nrgaway commented 4 years ago

Well, I ended up staying up late and could not get kvmchand to produce another segmentation fault, which I guess is a good thing :) I did get qrexec fully working now, along with qubes-db. It was kind of neat using qvm-run to transfer a file from the VM to the host.

shawnanastasio commented 4 years ago

Awesome! Have you tried everything without this sd_notify code? Now that the daemon can gracefully handle early VFIO commands without crashing, my guess is that sd_notify shouldn't be needed at all.

nrgaway commented 4 years ago

I apologize for the late response; I have been sick for the last week :(

I tested with sd_notify disabled, and while qubes-db and qrexec do end up connecting, the system starts in emergency mode and I have had failures when attempting to run qvm-run from the host. Until I investigate further, I am guessing it's due to other services that depend on either qubes-db or qrexec being fully initialized as well.

At the moment, I prefer to use sd_notify to ensure kvmchand is fully initialized before any other Qubes modules attempt to load, which should prevent any race conditions. If you prefer not to have the systemd notify built in by default in libkvmchan, we can either change the Makefile SYSTEMD option default from SYSTEMD ?= 1 to SYSTEMD ?= 0, or I can just apply a patch at build time.
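
In other words, the call could simply stay behind the existing build flag; a rough sketch of what I mean (assuming the Makefile's SYSTEMD option ends up defining a preprocessor macro; the macro name below is just an example):

/* Sketch only: assumes SYSTEMD ?= 1 translates into a preprocessor define
 * (called USE_SYSTEMD here purely for illustration). */
#ifdef USE_SYSTEMD
#include <systemd/sd-daemon.h>
#define notify_ready() sd_notify(0, "READY=1")
#else
#define notify_ready() ((void)0)
#endif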

To provide a better understanding of how kvmchand fits into the boot sequence I have provided an outline below.

The kvmchand service is wanted by the systemd sysinit.target, which means it runs early. It also contains an ExecStartPre statement that makes sure the VFIO modules are loaded and enable_unsafe_noiommu_mode is enabled, and it sets a flag to indicate the system is running within a KVM guest. FYI, the qubes-db service type is also 'notify' and it is also wanted by sysinit.target. Note that I plan to move the code that sets the hypervisor type into a systemd unit named hypervisor.service, which will call a script to set the hypervisor type (i.e. touch /var/run/qubes-service/hypervisor-{kvm|xen}).

My goal is to be able to build a single version of Qubes that uses the same packages for both Xen and KVM. For instance, the same Qubes virtual machine template image could be used within the current Qubes Xen hypervisor environment or copied to a KVM host and used unmodified.

To achieve this goal I have created systemd drop-ins for KVM that include the condition ConditionPathExists=/var/run/qubes-service/hypervisor-kvm. As it turns out, this logic appears flawed. I thought gating the drop-in on a condition would prevent it from being applied if the hypervisor-kvm file did not exist, so the main unit file would still run on Xen-based systems, but it seems the drop-ins just get merged on top of the main unit file, and since the condition is false on a Xen-based system, the unit will not run at all.

Since it does not seem I can make systemd drop-ins apply conditionally, I have focused my efforts on using a systemd generator to dynamically add the KVM drop-ins when a KVM hypervisor is detected. The code is written and the drop-ins get created in the /var/run/systemd/generator.early directory, but they do not get applied during boot; they do get applied when restarting a service, so I need to do further research.

For completeness I have also included samples of the current configuration files and scripts below.

kvmchand.service

[Unit]
Description=KVM vchan daemon
DefaultDependencies=no
After=systemd-modules-load.service fedora-loadmodules.service

[Service]
Type=notify
ExecStartPre=/usr/lib/qubes/qubes-kvm
ExecStart=/usr/sbin/kvmchand -g
KillMode=process
StandardOutput=syslog
NotifyAccess=all

[Install]
WantedBy=sysinit.target

/usr/lib/qubes/qubes-kvm

#!/usr/bin/sh

# Ensure vfio-pci modules loaded and 'enable_unsafe_noiommu_mode' is enabled.
/usr/sbin/modprobe vfio-pci || true
/usr/bin/sh -c "echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode"

## XXX: Might not be required any more since it is created by qubes-db.service
## which now waits until notified before starting.
# Ensure the '/var/run/qubes' directory exists and permissions are set correctly
mkdir -p /var/run/qubes
chmod 2770 /var/run/qubes
chgrp qubes /var/run/qubes

# Might not be required
mkdir -p /var/log/qubes

# Need to create the 'qubes-service' directory early since it is normally
# created by '/usr/lib/qubes/init/qubes-sysinit.sh' executed from the
# qubes-sysinit.service after being notified qubes-db.service has been started,
# and the qubes-db.service starts after the kvmchand.service.
mkdir -p /var/run/qubes-service
touch /var/run/qubes-service/hypervisor-kvm

qubes-db drop-in

[Unit]
ConditionPathExists=/var/run/qubes-service/hypervisor-kvm
After=kvmchand.service
Requires=kvmchand.service
nrgaway commented 4 years ago

Following are the kvmchand parts of a guest log when booting without sd_notify built into kvmchand.

You are in emergency mode. After logging in, type "journalctl -xb" to view
system logs, "systemctl reboot" to reboot, "systemctl default" or "exit"
to boot into default mode.

Cannot open access to console, the root account is locked.
See sulogin(8) man page for more details.

#Aug 23 22:33:28 localhost systemd-tmpfiles[476]: /usr/lib/tmpfiles.d/qubes-core-agent-linux.conf:1: Line references path below legacy directory /var/run/, updating /var/run/tinyproxy-updates → /run/tinyproxy-updates; please update the tmpfiles.d/ drop-in file accordingly.
#Aug 23 22:33:28 localhost systemd-tmpfiles[476]: /etc/tmpfiles.d/qubes-gpg-split.conf:1: Line references path below legacy directory /var/run/, updating /var/run/qubes-gpg-split → /run/qubes-gpg-split; please update the tmpfiles.d/ drop-in file accordingly.
Aug 23 22:33:29 localhost qubesdb-daemon[493]: FATAL: vchan initialization failed
Aug 23 22:33:29 localhost systemd[1]: Starting KVM vchan daemon...
Aug 23 22:33:29 localhost systemd[1]: Started KVM vchan daemon.
Aug 23 22:33:29 localhost systemd[1]: Starting Qubes DB agent...
Aug 23 22:33:29 localhost systemd[1]: qubes-db.service: Main process exited, code=exited, status=1/FAILURE
Aug 23 22:33:29 localhost systemd[1]: qubes-db.service: Failed with result 'exit-code'.
Aug 23 22:33:29 localhost systemd[1]: Failed to start Qubes DB agent.
#Aug 23 22:33:29 localhost systemd[1]: Starting Init Qubes Services settings...
Aug 23 22:33:29 localhost kvmchand[506]: [INFO] daemon/localhandler.c:580: guest_main: sd_notify

## 1st RAM MEMORY DEVICE
## 00:10.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)
## 00:02.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
Aug 23 22:33:29 localhost kvmchand[505]: [INFO] daemon/vfio.c:1205: Got ivshmem device: 0000:00:10.0
########

## 2nd RAM MEMORY DEVICE
## 00:11.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)
## 00:02.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
Aug 23 22:33:29 localhost kvmchand[505]: [INFO] daemon/vfio.c:1205: Got ivshmem device: 0000:00:11.0
########

Aug 23 22:33:29 localhost kvmchand[505]: [WARN] daemon/vfio.c:1221: Some ivshmem devices aren't bound to vfio-pci. Attempting to bind...
Aug 23 22:33:29 localhost kernel: vfio-pci 0000:00:10.0: Adding to iommu group 0  ## 1st RAM MEMORY DEVICE
Aug 23 22:33:29 localhost kernel: vfio-pci 0000:00:10.0: Adding kernel taint for vfio-noiommu group on device  ## 1st RAM MEMORY DEVICE
Aug 23 22:33:29 localhost kernel: vfio-pci 0000:00:11.0: Adding to iommu group 1  ## 2nd RAM MEMORY DEVICE
Aug 23 22:33:29 localhost kernel: vfio-pci 0000:00:11.0: Adding kernel taint for vfio-noiommu group on device  ## 2nd RAM MEMORY DEVICE
Aug 23 22:33:29 localhost kvmchand[505]: [WARN] daemon/vfio.c:1236: Successfully bound ivshmem devices.
Aug 23 22:33:29 localhost kernel: vfio-pci 0000:00:10.0: vfio-noiommu device opened by user (kvmchand:505)  ## 1st RAM MEMORY DEVICE
Aug 23 22:33:29 localhost kernel: vfio-pci 0000:00:11.0: vfio-noiommu device opened by user (kvmchand:505)  ## 2nd RAM MEMORY DEVICE
Aug 23 22:33:29 localhost kvmchand[505]: [INFO] daemon/vfio.c:993: Successfully connected to host daemon.
#Aug 23 22:33:29 localhost qubes-sysinit.sh[565]: Failed connect to local daemon
#Aug 23 22:33:29 localhost qubes-sysinit.sh[567]: Failed connect to local daemon
#Aug 23 22:33:29 localhost qubes-sysinit.sh[569]: Failed connect to local daemon
#Aug 23 22:33:29 localhost qubes-sysinit.sh[571]: Failed connect to local daemon
#Aug 23 22:33:29 localhost qubes-random-seed.sh[579]: Failed connect to local daemon
#Aug 23 22:33:29 localhost systemd[1]: Finished Init Qubes Services settings.
#Aug 23 22:33:29 localhost systemd[1]: Starting Initialize and mount /rw and /home...
#Aug 23 22:33:29 localhost systemd[1]: Starting Adjust root filesystem size...
#Aug 23 22:33:29 localhost systemd[1]: Finished Initialize and mount /rw and /home.
#Aug 23 22:33:29 localhost systemd[1]: Finished Adjust root filesystem size.
Aug 23 22:33:29 localhost systemd[1]: Started Emergency Shell.
Aug 23 22:33:29 localhost systemd[1]: Reached target Emergency Mode.
#Aug 23 22:33:29 localhost systemd[1]: Starting Early Qubes VM settings...
#Aug 23 22:33:29 localhost qubes-early-vm-config.sh[915]: Failed connect to local daemon
#Aug 23 22:33:29 localhost systemd[1]: Finished Early Qubes VM settings.
#Aug 23 22:33:29 localhost systemd-tmpfiles[911]: /usr/lib/tmpfiles.d/qubes-core-agent-linux.conf:1: Line references path below legacy directory /var/run/, updating /var/run/tinyproxy-updates → /run/tinyproxy-updates; please update the tmpfiles.d/ drop-in file accordingly.
#Aug 23 22:33:29 localhost systemd-tmpfiles[911]: /etc/tmpfiles.d/qubes-gpg-split.conf:1: Line references path below legacy directory /var/run/, updating /var/run/qubes-gpg-split → /run/qubes-gpg-split; please update the tmpfiles.d/ drop-in file accordingly.
#Aug 23 22:33:29 localhost systemd-udevd[529]: vda1: Process '/usr/lib/qubes/udev-block-add-change' failed with exit code 1.
#Aug 23 22:33:29 localhost systemd[1]: Mounted /rpm.
#Aug 23 22:33:29 localhost systemd[1]: Mounted /share.
#Aug 23 22:33:29 localhost systemd-udevd[536]: vda2: Process '/usr/lib/qubes/udev-block-add-change' failed with exit code 1.
Aug 23 22:33:47 localhost kvmchand[506]: [INFO] daemon/localhandler.c:620: Client connected! fd: 8
Aug 23 22:33:47 localhost kvmchand[505]: [INFO] daemon/vfio.c:1117: resp from host: err: 0, ret: 2
#Aug 23 22:33:47 localhost systemd[1]: Starting Qubes misc post-boot actions...
Aug 23 22:33:47 localhost systemd[1]: Starting Qubes remote exec agent...
Aug 23 22:33:47 localhost qrexec-agent[1028]: 2020-08-23 22:33:47.413 qrexec-agent[1028]: qrexec-agent.c:376:init: qrexec-agent.init: ctrl_vchan call
Aug 23 22:33:47 localhost kvmchand[506]: [INFO] daemon/localhandler.c:620: Client connected! fd: 9
Aug 23 22:33:47 localhost kvmchand[505]: [INFO] daemon/vfio.c:1117: resp from host: err: 0, ret: 3
Aug 23 22:33:47 localhost kernel: vfio-pci 0000:00:10.0: EDR: ACPI event 0x1 received  ## 1st RAM MEMORY DEVICE
Aug 23 22:33:47 localhost kernel: vfio-pci 0000:00:11.0: EDR: ACPI event 0x1 received  ## 2nd RAM MEMORY DEVICE

## 3rd RAM MEMORY DEVICE
## 00:12.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)
## 00:02.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
Aug 23 22:33:47 localhost kernel: pci 0000:00:12.0: [1af4:1110] type 00 class 0x050000
Aug 23 22:33:47 localhost kernel: pci 0000:00:12.0: reg 0x10: [mem 0x00000000-0x000000ff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:12.0: reg 0x14: [mem 0x00000000-0x00000fff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:12.0: reg 0x18: [mem 0x00000000-0x00003fff 64bit pref]
Aug 23 22:33:47 localhost kernel: pci 0000:00:12.0: BAR 2: assigned [mem 0x800134000-0x800137fff 64bit pref]
Aug 23 22:33:47 localhost kernel: pci 0000:00:12.0: BAR 1: assigned [mem 0xc844e000-0xc844efff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:12.0: BAR 0: assigned [mem 0xc844f000-0xc844f0ff]
########

Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]

Aug 23 22:33:47 localhost kernel: vfio-pci 0000:00:12.0: Adding to iommu group 2  ## 3rd RAM MEMORY DEVICE
Aug 23 22:33:47 localhost kernel: vfio-pci 0000:00:12.0: Adding kernel taint for vfio-noiommu group on device  ## 3rd RAM MEMORY DEVICE

Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]

#Aug 23 22:33:47 localhost systemd[1]: Started Serial Getty on ttyS0.

Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x800200000-0x8003fffff 64bit pref]

Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]

Aug 23 22:33:47 localhost kernel: pci_bus 0000:02: Allocating resources
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

... 7 more identical copies of the above PCI bridge window block

Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:47 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

... 5 more identical copies of the above PCI bridge window block

Aug 23 22:33:51 localhost.localdomain kernel: vfio-pci 0000:00:12.0: vfio-noiommu device opened by user (kvmchand:514)  ## 3rd RAM MEMORY DEVICE
Aug 23 22:33:51 localhost.localdomain kvmchand[506]: [INFO] daemon/localhandler.c:328: fds: {10, 11, 12, 13, 14}
Aug 23 22:33:51 localhost.localdomain systemd[1]: Started Qubes DB agent.
#Aug 23 22:33:51 localhost.localdomain nm-dispatcher[1297]: /usr/lib/qubes/qubes-setup-dnat-to-ns: line 17: /var/run/qubes/qubes-ns: No such file or directory
Aug 23 22:33:51 localhost.localdomain kvmchand[506]: [INFO] daemon/localhandler.c:328: fds: {10, 11, 12, 13, 14}
Aug 23 22:33:51 localhost.localdomain qrexec-agent[1028]: 2020-08-23 22:33:51.424 qrexec-agent[1028]: qrexec-agent.c:378:init: qrexec-agent.init: ctrl_vchan returned
Aug 23 22:33:51 localhost.localdomain systemd[1]: Started Qubes remote exec agent.
#Aug 23 22:33:51 localhost.localdomain systemd[1]: Reached target Multi-User System.
#Aug 23 22:33:51 localhost.localdomain kernel: fbcon: Taking over console
Aug 23 22:33:51 localhost.localdomain kvmchand[506]: [INFO] daemon/localhandler.c:620: Client connected! fd: 10

## 4th RAM MEMORY DEVICE
## 00:13.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)
## 00:02.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:13.0: [1af4:1110] type 00 class 0x050000
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:13.0: reg 0x10: [mem 0x00000000-0x000000ff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:13.0: reg 0x14: [mem 0x00000000-0x00000fff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:13.0: reg 0x18: [mem 0x00000000-0x0003ffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:13.0: BAR 2: assigned [mem 0x800140000-0x80017ffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:13.0: BAR 1: assigned [mem 0xc8450000-0xc8450fff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:13.0: BAR 0: assigned [mem 0xc8451000-0xc84510ff]
########

Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:51 localhost.localdomain kvmchand[505]: [INFO] daemon/vfio.c:1117: resp from host: err: 0, ret: 4
Aug 23 22:33:51 localhost.localdomain kernel: vfio-pci 0000:00:13.0: Adding to iommu group 3  ## 4th RAM MEMORY DEVICE
Aug 23 22:33:51 localhost.localdomain kernel: vfio-pci 0000:00:13.0: Adding kernel taint for vfio-noiommu group on device  ## 4th RAM MEMORY DEVICE

Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:51 localhost.localdomain kernel: pci_bus 0000:01: Allocating resources
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:51 localhost.localdomain kernel: pci_bus 0000:02: Allocating resources
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

... 5 more identical copies of the above PCI bridge window block

Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
Aug 23 22:33:51 localhost.localdomain kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

Aug 23 22:33:55 localhost.localdomain kernel: vfio-pci 0000:00:13.0: vfio-noiommu device opened by user (kvmchand:514)  ## 4th RAM MEMORY DEVICE
Aug 23 22:33:55 localhost.localdomain kvmchand[506]: [INFO] daemon/localhandler.c:328: fds: {11, 12, 13, 14, 15}
shawnanastasio commented 4 years ago

> I apologize for the late response; I have been sick for the last week :(

No worries at all, I hope you are feeling better!

> I tested with sd_notify disabled and, while qubes-db and qrexec do end up connecting, the system starts in emergency mode and I have had failures when attempting to run qvm-run from the host. Pending further investigation, my guess is that this is due to other services that also depend on qubes-db or qrexec being fully initialized.

> At the moment, I prefer to use sd_notify to ensure kvmchand is fully initialized before any other Qubes modules attempt to load, which should prevent any race conditions. If you prefer not to have systemd notify support built in by default, or within libkvmchan at all, we can either change the Makefile SYSTEMD option from SYSTEMD ?= 1 to SYSTEMD ?= 0, or I can just apply a patch at build time.

From what I can tell, this is happening because qubesdb is spawned immediately after kvmchand and attempts to connect to the kvmchand socket before localhandler has set it up. The current library behavior is to simply return a failure when the socket connection fails, and that failure is caught by qubesdb.
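(For illustration, a minimal sketch of the ordering the sd_notify approach relies on; the socket path and the USE_SYSTEMD guard are placeholders, not the actual localhandler.c code. READY=1 is only sent once the local socket is bound and listening, so any unit ordered after kvmchand cannot hit the connect failure described above.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    #ifdef USE_SYSTEMD              /* placeholder guard set by the SYSTEMD build option */
    #include <systemd/sd-daemon.h>
    #endif

    int main(void) {
        /* Create and bind the UNIX socket that clients will connect to. */
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return EXIT_FAILURE; }

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        /* placeholder path, not the real kvmchand socket location */
        strncpy(addr.sun_path, "/tmp/kvmchand/example_socket", sizeof(addr.sun_path) - 1);
        unlink(addr.sun_path);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 5) < 0) {
            perror("bind/listen");
            return EXIT_FAILURE;
        }

    #ifdef USE_SYSTEMD
        /* Only now is it safe to tell systemd we are ready: units with
         * Requires=/After=kvmchand.service (e.g. qubes-db) will then start
         * after the socket exists, so their first connect() cannot race us. */
        sd_notify(0, "READY=1");
    #endif

        /* ... accept() loop would follow here ... */
        close(fd);
        return EXIT_SUCCESS;
    }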

I'm a bit hesitant to include systemd-specific workarounds for what could be considered an issue with libkvmchan, but in this case it does seem like sd_notify is one of the cleaner solutions. If you could change the default value of SYSTEMD to 0 and set it manually in your build scripts, that would be ideal.
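(A hypothetical sketch of such a build-time toggle; the real libkvmchan Makefile may be wired differently, and the USE_SYSTEMD macro name is an assumption:)

    # Build without systemd support unless the caller opts in.
    SYSTEMD ?= 0

    ifeq ($(SYSTEMD),1)
    CFLAGS += -DUSE_SYSTEMD $(shell pkg-config --cflags libsystemd)
    LDLIBS += $(shell pkg-config --libs libsystemd)
    endif

A build script can then opt in explicitly with make SYSTEMD=1, while the default build carries no libsystemd dependency.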

> To provide a better understanding of how kvmchand fits into the boot sequence I have provided an outline below.

> The kvmchand service is wanted by the systemd sysinit.target, which means it runs early. It also contains an ExecStartPre statement which makes sure the VFIO modules are loaded, enables enable_unsafe_noiommu_mode, and sets a flag to indicate the system is running within a KVM guest. FYI, the qubes-db service type is also 'notify' and it is also wanted by sysinit.target. Note that I plan to move the code that sets the hypervisor type to a systemd unit file named hypervisor.service, which will call a script to set the hypervisor type (i.e. touch /var/run/qubes-service/hypervisor-{kvm|xen}).
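(Illustrative sketch of a unit along those lines; the binary path and helper-script name are assumptions, not the files shipped by this patch set:)

    # Sketch of a guest-side kvmchand.service.
    [Unit]
    Description=kvmchan daemon

    [Service]
    Type=notify
    # Hypothetical helper: loads the VFIO modules, enables
    # enable_unsafe_noiommu_mode and records the hypervisor type.
    ExecStartPre=/usr/lib/kvmchan/prepare-vfio.sh
    ExecStart=/usr/sbin/kvmchand

    [Install]
    WantedBy=sysinit.target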

> My goal is to be able to build a single version of Qubes that uses the same packages for both Xen and KVM. For instance, the same Qubes virtual machine template image could be used within the current Qubes Xen hypervisor environment or copied to a KVM host and used unmodified.

> To achieve this goal I have created systemd drop-ins for KVM that include the condition ConditionPathExists=/var/run/qubes-service/hypervisor-kvm. As it turns out, this logic appears flawed. I thought adding a drop-in based on a condition would prevent the drop-in from being applied if the hypervisor-kvm file did not exist, so the main unit file would still run on Xen-based systems. Instead, the drop-ins just get merged on top of the main systemd unit file, and since the condition is false on a Xen-based system, the unit will not run at all.
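(Sketch of such a drop-in; the file path is hypothetical. Because drop-ins are merged into the unit they modify, the condition below becomes a condition on qubes-db.service itself, which is exactly the behavior described above:)

    # Hypothetical drop-in: /etc/systemd/system/qubes-db.service.d/30-kvm.conf
    [Unit]
    # This condition is merged into qubes-db.service, so on a Xen system,
    # where the path does not exist, the whole service is skipped.
    ConditionPathExists=/var/run/qubes-service/hypervisor-kvm
    Requires=kvmchand.service
    After=kvmchand.service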

> Since it does not seem I can apply systemd drop-ins conditionally, I have focused my efforts on using a systemd generator to dynamically add the KVM drop-ins when a KVM hypervisor is detected. The code is written and the drop-ins get created within the /var/run/systemd/generator.early directory, but they do not get applied during boot; they do get applied when restarting a service, so I need to do further research.
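(For reference, the general shape of such a generator, sketched under the assumption that systemd-detect-virt is available in the guest; systemd runs every executable in its generator directories very early and passes three output directories as arguments, the second being the early directory mentioned above:)

    #!/bin/sh
    # Hypothetical generator: /usr/lib/systemd/system-generators/kvm-dropins
    # $1 = normal dir, $2 = early dir, $3 = late dir
    early_dir="$2"

    # Only emit the KVM drop-ins when actually running under KVM.
    if [ "$(systemd-detect-virt --vm)" = "kvm" ]; then
        mkdir -p "$early_dir/qubes-db.service.d"
        printf '%s\n' '[Unit]' 'Requires=kvmchand.service' 'After=kvmchand.service' \
            > "$early_dir/qubes-db.service.d/30-kvm.conf"
    fi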

> For completeness I have also included samples of the current configuration files and scripts below. [snip]

Unfortunately I'm no expert when it comes to init systems or systemd in particular, but the idea of being able to use the same VM images on both Xen and KVM hosts does sound appealing. If there's anything kvmchand can do to make this easier, I'm open to suggestions (perhaps kvmchand can be made to detect Xen and exit gracefully? I'm not sure if this would simplify any of the service dependency logic.)

nrgaway commented 4 years ago

> I'm a bit hesitant to include systemd-specific workarounds for what could be considered an issue with libkvmchan, but in this case it does seem like sd_notify is one of the cleaner solutions. If you could change the default value of SYSTEMD to 0 and set it manually in your build scripts, that would be ideal.

Completed.

> Unfortunately I'm no expert when it comes to init systems or systemd in particular, but the idea of being able to use the same VM images on both Xen and KVM hosts does sound appealing. If there's anything kvmchand can do to make this easier, I'm open to suggestions (perhaps kvmchand can be made to detect Xen and exit gracefully? I'm not sure if this would simplify any of the service dependency logic.)

I'm sure I will get the systemd generators working. I must just be missing something simple, as I have created generators in the past. Xen detection can be handled by the systemd service unit by adding the condition ConditionVirtualization=!xen, so kvmchand would not need to implement anything.
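(That is, a single condition in the [Unit] section of kvmchand.service, sketched below, keeps the daemon from starting under Xen; when the condition fails, systemd simply skips the unit rather than marking it failed.)

    [Unit]
    ConditionVirtualization=!xen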

shawnanastasio commented 4 years ago

Thanks!