virtio-win / kvm-guest-drivers-windows

Windows paravirtualized drivers for QEMU/KVM
https://www.linux-kvm.org/page/WindowsGuestDrivers
BSD 3-Clause "New" or "Revised" License

Erroneous RequestDuration measurement using ETW #231

Closed: tarihi closed this issue 2 years ago

tarihi commented 6 years ago

I have been trying to measure per-request response time under KVM, as seen from Windows, using the methodology described here. The disk is passed through to the VM using virtio-scsi, but the "RequestDuration" value returned by ETW is always 0. The exact same disk produces correct results when I boot directly from it. I am using the Fedora virtio drivers.

Is this due to the fundamental design of the virtio-scsi driver, or is the driver not initializing a data structure, flipping a bit, etc.?

Thanks for all the efforts!

vrozenfe commented 6 years ago

Hi tarihi, thank you for reporting this issue. At the moment vioscsi (just like viostor) does not implement the optional HwTracingEnabled routine, which in turn prevents us from logging and reporting IoTargetRequestServiceTime.

It will be added in one of the upcoming versions. All the best, Vadim.
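A minimal sketch of what opting in to that callback could look like in a Storport miniport's DriverEntry, not the vioscsi/viostor implementation: the HwTracingEnabled member of HW_INITIALIZATION_DATA is the hook being discussed, but the callback prototype and the ADAPTER_EXT/HwTracingEnabledCallback names below are illustrative assumptions; the authoritative definition lives in storport.h.

```c
/*
 * Illustrative sketch only -- registers a tracing-enabled callback so the
 * I/O path can tell whether Storport ETW tracing is active.  The callback
 * prototype is an assumption; check PHW_TRACING_ENABLED in storport.h.
 */
#include <storport.h>

typedef struct _ADAPTER_EXT {
    BOOLEAN TracingEnabled;   /* latched here, consulted on the I/O path */
} ADAPTER_EXT, *PADAPTER_EXT;

/* Assumed prototype: invoked by Storport when ETW tracing is toggled. */
static VOID HwTracingEnabledCallback(PVOID HwDeviceExtension, BOOLEAN Enabled)
{
    ((PADAPTER_EXT)HwDeviceExtension)->TracingEnabled = Enabled;
}

ULONG DriverEntry(PVOID DriverObject, PVOID RegistryPath)
{
    HW_INITIALIZATION_DATA hwInitData;

    RtlZeroMemory(&hwInitData, sizeof(hwInitData));
    hwInitData.HwInitializationDataSize = sizeof(hwInitData);
    hwInitData.DeviceExtensionSize      = sizeof(ADAPTER_EXT);
    /* ... HwFindAdapter, HwInitialize, HwStartIo, etc. set as usual ... */
    hwInitData.HwTracingEnabled = HwTracingEnabledCallback;

    return StorPortInitialize(DriverObject, RegistryPath, &hwInitData, NULL);
}
```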

peixiu commented 3 years ago

Hi all,

I tried to reproduce this issue on a RHEL 8.3.1 host, following steps similar to those in comment#0, and I can reproduce it. I tested with a QEMU-emulated raw image file; I did not test with a virtio-scsi passthrough disk. I tried both the virtio-blk (viostor) and virtio-scsi (vioscsi) drivers; both reproduce the issue, and the "RequestDuration" value returned by ETW is always 0.

QEMU command lines:

For virtio-scsi:
-device virtio-scsi-pci,id=scsi0,bus=root2.0 \
-blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=192SCS126435AWK_test.raw,node-name=my_scsi \
-blockdev driver=raw,node-name=myscsi,file=my_scsi \
-device scsi-hd,bus=scsi0.0,drive=myscsi,id=scsi-disk0,serial=whql_test

For virtio-blk:
-blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/win8-64-virtio.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
-device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
-device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-2,addr=0x0

Versions used:
kernel-4.18.0-240.11.1.el8_3.x86_64
qemu-kvm-5.1.0-17.module+el8.3.1+9213+7ace09c3.x86_64
seabios-bin-1.14.0-1.module+el8.3.0+7638+07cf13d2.noarch
virtio-win-prewhql-192

Best Regards~ Peixiu

vrozenfe commented 3 years ago

I do have a patch set that deals with the IoTargetRequestServiceTime notification: https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/storport/nf-storport-storportnotification

Honestly, I didn't find it very useful for tracing performance issues, mostly because the same result can be obtained by taking the difference between two timestamps: when a particular SRB was issued and when it was completed. However, I can submit my patches if you think they could still be useful.

Cheers, Vadim
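For illustration, a minimal sketch of the timestamp-difference approach described above: record a counter value when the SRB is issued and compute the delta on completion. It assumes a per-request SRB extension (via SrbExtensionSize in HW_INITIALIZATION_DATA) and the StorPortQueryPerformanceCounter helper; the SRB_LATENCY_EXT layout and function names are hypothetical, not taken from the vioscsi/viostor sources.

```c
/*
 * Hypothetical sketch of per-SRB service-time measurement by timestamp
 * difference.  SRB_LATENCY_EXT lives in the per-request SrbExtension.
 */
#include <storport.h>

typedef struct _SRB_LATENCY_EXT {
    LARGE_INTEGER IssueTime;                /* counter captured in HwStartIo */
} SRB_LATENCY_EXT, *PSRB_LATENCY_EXT;

/* Call from HwStartIo, just before the request is handed to the device. */
static VOID RecordIssueTime(PVOID DeviceExtension, PSCSI_REQUEST_BLOCK Srb)
{
    PSRB_LATENCY_EXT ext = (PSRB_LATENCY_EXT)Srb->SrbExtension;
    LARGE_INTEGER freq;

    StorPortQueryPerformanceCounter(DeviceExtension, &freq, &ext->IssueTime);
}

/* Call on the completion path, before StorPortNotification(RequestComplete, ...). */
static LONGLONG ServiceTime100ns(PVOID DeviceExtension, PSCSI_REQUEST_BLOCK Srb)
{
    PSRB_LATENCY_EXT ext = (PSRB_LATENCY_EXT)Srb->SrbExtension;
    LARGE_INTEGER now, freq;

    StorPortQueryPerformanceCounter(DeviceExtension, &freq, &now);
    /* Convert counter ticks to 100 ns units. */
    return ((now.QuadPart - ext->IssueTime.QuadPart) * 10000000LL) / freq.QuadPart;
}
```

The computed delta could then be logged through the driver's own tracing, independently of the IoTargetRequestServiceTime notification, which is the point Vadim makes above.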

peixiu commented 3 years ago

> I do have a patch set that deals with the IoTargetRequestServiceTime notification: https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/storport/nf-storport-storportnotification
>
> Honestly, I didn't find it very useful for tracing performance issues, mostly because the same result can be obtained by taking the difference between two timestamps: when a particular SRB was issued and when it was completed. However, I can submit my patches if you think they could still be useful.

Thanks for your explanation, I understand now, and I agree with closing this issue.

Best Regards~ Peixiu