Closed arthurrasmusson closed 2 years ago
VirtIO Driver Version: Tested on virtio-win-0.1.215.iso & virtio-win-0.1.96.iso & virtio-win-0.1.187.iso
virtio-win-0.1.96.iso doesn't contain VirtIO-FS. How could it even be tested?
Maybe it wasn't that one. I tested 3 different versions total.
Why then claim something that you are not sure about?
I thought that was the one I used, but it must have been another one.
Does your QEMU crash along with virtiofsd?
Before package updates no. After updates yes, both crash on VM start.
Have you tried to remove vhost-user-fs-pci?
These messages
virtio_loop: Unexpected poll revents 11
virtio_loop: Exit
appear because the guest doesn't properly finalize the connection to virtiofsd (which is normal for VirtIO-FS on Windows at the moment), or didn't set up the connection at all.
After updates yes, both crash on VM start.
Looks like you are experiencing a QEMU problem, not a virtio-win one.
Since updating packages in Arch, the service now crashes immediately upon VM start with no changes to the other parameters mentioned above.
Especially if it began to happen after QEMU upgrade.
Do you have any error messages from QEMU?
Do you have any error messages from QEMU?
No, QEMU does not appear to output error messages.
appear because the guest doesn't properly finalize the connection to virtiofsd (which is normal for VirtIO-FS on Windows at the moment), or didn't set up the connection at all.
This also occurred on the host sometimes during VM runtime (prior to the update that broke VM start).
Also, QEMU currently works perfectly without the VirtIO-FS parameters. Starting the same VM with identical parameters (with VirtIO-FS removed) works without issue.
Could you please share the full QEMU command line?
Please also share how you run QEMU without VirtIO-FS.
You can also add -d to the virtiofsd command line to obtain debug output.
If you paste the QEMU command lines as text here, I can try to reproduce your problem.
You can also add -d to the virtiofsd command line to obtain debug output.
Yes, here's the debug log output:
[2022-01-25 21:30:55.948856+0000] [ID: 00002250] virtio_session_mount: Waiting for vhost-user socket connection...
[2022-01-25 21:30:58.173424+0000] [ID: 00002250] virtio_session_mount: Received vhost-user socket connection
[2022-01-25 21:30:58.175048+0000] [ID: 00000001] virtio_loop: Entry
[2022-01-25 21:30:58.175074+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-01-25 21:30:58.205462+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-01-25 21:30:58.205489+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-01-25 21:30:58.205575+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-01-25 21:30:58.205591+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-01-25 21:30:58.205678+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-01-25 21:30:58.205690+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-01-25 21:30:58.205694+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-01-25 21:30:58.205701+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-01-25 21:30:58.205782+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-01-25 21:30:58.205797+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-01-25 21:30:58.205882+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-01-25 21:30:58.205892+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-01-25 21:30:58.205975+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-01-25 21:30:58.205980+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-01-25 21:30:58.205984+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-01-25 21:30:58.205990+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-01-25 21:30:58.206047+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-01-25 21:30:58.206054+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-01-25 21:30:58.206058+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-01-25 21:30:58.206063+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-01-25 21:31:07.669356+0000] [ID: 00000001] virtio_loop: Unexpected poll revents 11
[2022-01-25 21:31:07.671650+0000] [ID: 00000001] virtio_loop: Exit
(reformatted strings @viktor-prutyanov)
Working QEMU command (no VirtIO-FS):
/bin/qemu-system-x86_64 -D /home/user/.local/libvf.io/logs/qemu/fcceeeb9-a873-4e21-858f-4435055da6f8-session.txt -no-hpet -nographic -vga none -serial none -parallel none -device qemu-xhci,p2=15,p3=15,id=usb -device virtio-serial-pci,id=virtio-serial0 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -spice port=5900,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1 -device ivshmem-plain,id=shmem0,memdev=ivshmem_kvmfr -object memory-backend-file,id=ivshmem_kvmfr,mem-path=/dev/shm/kvmfr-fcceeeb9-a873-4e21-858f-4435055da6f8,size=128M,share=yes -device ivshmem-plain,id=shmem1,memdev=ivshmem_kvmsr -object memory-backend-file,id=ivshmem_kvmsr,mem-path=/dev/shm/kvmsr-fcceeeb9-a873-4e21-858f-4435055da6f8,size=2M,share=yes -uuid fcceeeb9-a873-4e21-858f-4435055da6f8 -machine pc-q35-4.2,accel=kvm,usb=off,vmport=off,dump-guest-core=off -cpu host,ss=on,vmx=on,pcid=on,-hypervisor,arat=on,tsc-adjust=on,umip=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaveopt=on,pdpe1gb=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,skip-l1dfl-vmentry=on,pschange-mc-no=on,hv-vapic,hv_time,hv-spinlocks=0x1fff,hv-vendor-id=null,kvm=off,topoext=on -rtc clock=host,base=localtime -m 8192 -smp cores=4,threads=1,sockets=1 -hda /home/user/.local/libvf.io/kernel/windows.arc --enable-kvm -device vfio-pci,id=hostdev0,sysfsdev=/sys/bus/mdev/devices/3a618d36-f724-44f6-aaa6-1f14fdbf383d,display=off -device rtl8139,netdev=net0 -netdev user,id=net0,hostfwd=tcp::2222-:22 -qmp unix:/tmp/sockets/fcceeeb9-a873-4e21-858f-4435055da6f8/main.sock,server,nowait -qmp unix:/tmp/sockets/fcceeeb9-a873-4e21-858f-4435055da6f8/master.sock,server,nowait -mem-path /dev/hugepages -set device.hostdev0.x-pci-device-id=6960
QEMU command which broke during runtime (before the update) and now breaks at VM start (since the update):
/bin/qemu-system-x86_64 -D /home/user/.local/libvf.io/logs/qemu/fcceeeb9-a873-4e21-858f-4435055da6f8-session.txt -no-hpet -nographic -vga none -serial none -parallel none -device qemu-xhci,p2=15,p3=15,id=usb -device virtio-serial-pci,id=virtio-serial0 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -spice port=5900,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1 -device ivshmem-plain,id=shmem0,memdev=ivshmem_kvmfr -object memory-backend-file,id=ivshmem_kvmfr,mem-path=/dev/shm/kvmfr-fcceeeb9-a873-4e21-858f-4435055da6f8,size=128M,share=yes -device ivshmem-plain,id=shmem1,memdev=ivshmem_kvmsr -object memory-backend-file,id=ivshmem_kvmsr,mem-path=/dev/shm/kvmsr-fcceeeb9-a873-4e21-858f-4435055da6f8,size=2M,share=yes -uuid fcceeeb9-a873-4e21-858f-4435055da6f8 -machine pc-q35-4.2,accel=kvm,usb=off,vmport=off,dump-guest-core=off -cpu host,ss=on,vmx=on,pcid=on,-hypervisor,arat=on,tsc-adjust=on,umip=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaveopt=on,pdpe1gb=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,skip-l1dfl-vmentry=on,pschange-mc-no=on,hv-vapic,hv_time,hv-spinlocks=0x1fff,hv-vendor-id=null,kvm=off,topoext=on -rtc clock=host,base=localtime -m 8192 -smp cores=4,threads=1,sockets=1 -hda /home/user/.local/libvf.io/kernel/windows.arc --enable-kvm -device vfio-pci,id=hostdev0,sysfsdev=/sys/bus/mdev/devices/3a618d36-f724-44f6-aaa6-1f14fdbf383d,display=off -device rtl8139,netdev=net0 -netdev user,id=net0,hostfwd=tcp::2222-:22 -qmp unix:/tmp/sockets/fcceeeb9-a873-4e21-858f-4435055da6f8/main.sock,server,nowait -qmp unix:/tmp/sockets/fcceeeb9-a873-4e21-858f-4435055da6f8/master.sock,server,nowait -mem-path /dev/hugepages -set device.hostdev0.x-pci-device-id=6960
The following works with QEMU 6.2.0 without any crashes:
/usr/libexec/virtiofsd --socket-path=/tmp/socket2 -o source=/home/vp/vms/viofs -o cache=always -d &
/usr/local/bin/qemu-system-x86_64 \
-no-hpet -device ivshmem-plain,id=shmem0,memdev=ivshmem_kvmfr -object memory-backend-file,id=ivshmem_kvmfr,mem-path=/dev/shm/kvmfr-4adefd5e-71b8-4daf-8e87-ec3dd4a51fec,size=128M,share=yes -device ivshmem-plain,id=shmem1,memdev=ivshmem_kvmsr -object memory-backend-file,id=ivshmem_kvmsr,mem-path=/dev/shm/kvmsr-4adefd5e-71b8-4daf-8e87-ec3dd4a51fec,size=2M,share=yes -uuid 4adefd5e-71b8-4daf-8e87-ec3dd4a51fec -machine pc-q35-4.2,accel=kvm,usb=off,vmport=off,dump-guest-core=off -cpu host,ss=on,vmx=on,pcid=on,-hypervisor,arat=on,tsc-adjust=on,umip=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaveopt=on,pdpe1gb=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,skip-l1dfl-vmentry=on,pschange-mc-no=on,hv-vapic,hv_time,hv-spinlocks=0x1fff,hv-vendor-id=null,kvm=off,topoext=on -rtc clock=host,base=localtime -m 8192 -smp cores=4,threads=1,sockets=1 -hda /home/vp/vms/win2k19-2.qcow2 --enable-kvm -device rtl8139,netdev=net0 -netdev user,id=net0,hostfwd=tcp::2222-:22 -mem-path /dev/hugepages -object memory-backend-memfd,id=mem,size=8G,share=on -numa node,memdev=mem -chardev socket,id=char0,path=/tmp/socket2 -device vhost-user-fs-pci,chardev=char0,tag=lime,queue-size=1024
Your virtiofsd debug log shows that no VirtIO queues were created, because neither fv_queue_set_started nor fv_queue_thread messages are present.
The normal flow is:
[31578597780497] [ID: 00093040] virtio_session_mount: Waiting for vhost-user socket connection...
[31578611395551] [ID: 00093040] virtio_session_mount: Received vhost-user socket connection
[31578613495737] [ID: 00000001] virtio_loop: Entry
[31578613515785] [ID: 00000001] virtio_loop: Waiting for VU event
[31578625569177] [ID: 00000001] virtio_loop: Got VU event
[31578625613376] [ID: 00000001] virtio_loop: Waiting for VU event
........
[31679074434192] [ID: 00000001] virtio_loop: Waiting for VU event
[31679074436298] [ID: 00000001] virtio_loop: Got VU event
[31679074439247] [ID: 00000001] fv_queue_set_started: qidx=0 started=1
[31679074531382] [ID: 00000001] virtio_loop: Waiting for VU event
[31679074536220] [ID: 00000001] virtio_loop: Got VU event
[31679074552243] [ID: 00000001] virtio_loop: Waiting for VU event
[31679074554312] [ID: 00000001] virtio_loop: Got VU event
[31679074565542] [ID: 00000001] virtio_loop: Waiting for VU event
[31679074615680] [ID: 00000003] fv_queue_thread: Start for queue 0 kick_fd 8
[31679074623950] [ID: 00000003] fv_queue_thread: Waiting for Queue 0 event
[31679074626607] [ID: 00000003] fv_queue_thread: Got queue event on Queue 0
[31679074630502] [ID: 00000003] fv_queue_thread: Queue 0 gave evalue: 2 available: in: 0 out: 0
[31679074632784] [ID: 00000003] fv_queue_thread: Waiting for Queue 0 event
So, it looks like the VirtIO-FS Windows driver is not involved.
I'm also having this issue; however, I was using libvirt and virt-manager to test out VirtIO-FS devices. I used winfsp-1.10.22006.msi and the stable virtio drivers from here, both downloaded yesterday as I was testing. Normal files worked fine, but when I tried any kind of executable the service immediately died, and the application would either not start or, in the case of Steam, give a disk write error even with a small game. If it would help with the issue, I can provide my XML from libvirt. Other than the virtio device drivers, WinFsp, Steam, and the VirtIO-FS service, it's a stock install of Windows.
If you need logs, I don't know how to add the -d option to virtiofsd, as I don't run it manually (it's probably handled by libvirt automatically), but if you can tell me what to do to get the logs, I can also provide those.
Hi @GrandtheUK,
drivers from here both downloaded yesterday
I suppose you are using 0.1.215
in the case of steam give an disk write error even with a small game
I think there are 2 options possible:
- Steam is installed on VirtIO-FS along with the game library
- Steam is installed on NTFS, but the game library is stored on VirtIO-FS

Which one is yours?
As for the 1st option, there is an issue https://github.com/virtio-win/kvm-guest-drivers-windows/issues/669 where Steam is unable to access its own DLLs because of VirtIO-FS's case-sensitivity. We're working on it.
As for the 2nd option, I've reproduced a game crash, but VirtIO-FS service is alive.
I think there are 2 options possible:
- Steam is installed on VirtIO-FS along with game library
- Steam is installed on NTFS, but game library is stored on VirtIO-FS
Which one is yours?
I've tried both. Installing Steam to the VirtIO-FS share worked through the installer, but the service died the first time Steam started and began grabbing updates. Also, when it's just the Steam library, with the games (Windows versions) already there, attempting to run them crashes the service. And if I try to run other games' or programs' executables directly, it also crashes the service.
also if I try to run other games' or programs' executables directly it crashes the service.
Does any executable crash the service? Even like this?
C:\Users\Administrator>copy C:\Windows\System32\calc.exe Z:\
1 file(s) copied.
C:\Users\Administrator>Z:\calc.exe
I'll give it a shot.
Copying calc.exe across and running it works. I'll try running some of the other programs and games through cmd to see if there are any errors there.
Please give examples of "some other programs"
I have Deltarune, which is probably the only game I have on the share that I can run in the VM at the moment, but I could try installing Notepad++ to the shared folder and check that as well. When I ran Deltarune there wasn't any obvious error code; it just exited and the service died the same as before.
I've created a separate issue, because this issue is dedicated to the VM crash.
I was able to fix the bug where the virtiofsd host service crashes when starting a VM by switching from Arch to Fedora. I'm not sure why this fixed the problem; maybe something is improperly configured in Arch, or there is a bug in the binary shipped on Arch.
VirtIO Driver Version: Tested on virtio-win-0.1.215.iso & virtio-win-0.1.96.iso & virtio-win-0.1.187.iso
WinFSP Version: Tested on winfsp-1.10.22006.msi (WinFSP 2022) & winfsp-1.8.20304.msi (WinFSP 2020.2)
QEMU Version: 6.2.0
Guest OS: Tested on Windows 10 Pro & Windows 10 LTSC
Host OS: Arch Linux
QEMU parameters used: -mem-path /dev/hugepages -object memory-backend-memfd,id=mem,size=8G,share=on -numa node,memdev=mem -chardev socket,id=char0,path=/home/user/.local/libvf.io/sockets/virtio-fs-001.sock -device vhost-user-fs-pci,chardev=char0,tag=lime,queue-size=1024
Host virtiofsd commands used:
sudo /usr/lib/qemu/virtiofsd --socket-path=/home/user/.local/libvf.io/sockets/virtio-fs-001.sock -o source=/tmp/vm-001 -o cache=always
sudo chgrp kvm /home/user/.local/libvf.io/sockets/virtio-fs-001.sock
sudo chmod g+rxw /home/user/.local/libvf.io/sockets/virtio-fs-001.sock
Description of problem: When attempting to execute programs from the shared VirtIO-FS directory, the Windows virtiofsd service crashes. This also occurs when attempting to delete or modify large files. Since updating packages in Arch, the host virtiofsd service now crashes immediately upon VM start with no changes to the other parameters mentioned above.