jonomango / hv

Lightweight Intel VT-x Hypervisor.
MIT License

Game loads slowly when using hv #23

Open · thewolfram opened 1 year ago

thewolfram commented 1 year ago

hey there, I use hv while playing eft and I noticed that the loot loading stage takes a very long time, about 5 minutes, but when I play without hv it loads in 1 minute max. I found on unknowncheats that people also had this problem when playing on virtual machines. Someone suggested that the game spams cpuid while loot is loading, but I'm not sure if that's the true reason: https://www.unknowncheats.me/forum/escape-from-tarkov/490919-stuck-loading-loot-qemu-kvm.html https://www.unknowncheats.me/forum/escape-from-tarkov/568814-loading-loot-slow-kvm.html

EDIT: someone also said to use invtsc; this might be related too

jonomango commented 1 year ago

@thewolfram Sorry for the late reply! invtsc isn't related to this problem, although the game spamming cpuid is a much more realistic scenario. You can just use the usermode logger and add an HV_LOG_INFO() call somewhere around here that prints the exit reason (make sure to ignore VMCALL exits though!).
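For anyone following along, here is a minimal sketch of what that logging could look like, assuming vmx_vmread() is available in the exit handler and that HV_LOG_INFO() accepts printf-style arguments; the constant and helper declarations below are just for illustration and may differ from hv's actual code:

```cpp
#include <cstdint>

// assumed hv helper, declared elsewhere in the project
extern uint64_t vmx_vmread(uint64_t field);

// VMCS field encoding for the exit reason (from the Intel SDM)
constexpr uint64_t VMCS_EXIT_REASON = 0x4402;

// call this from the vm-exit dispatch routine: it logs the basic exit
// reason for every exit, skipping VMCALLs so that hypercalls (e.g. from
// the usermode logger itself) don't flood the log.
inline void log_exit_reason() {
  // bits 15:0 of the exit reason field hold the basic exit reason
  auto const reason = vmx_vmread(VMCS_EXIT_REASON) & 0xFFFF;

  // basic exit reason 18 is VMCALL
  if (reason != 18)
    HV_LOG_INFO("vm-exit reason: %u.", static_cast<uint32_t>(reason));
}
```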

chiefmasterR commented 1 year ago

It's ok, I'm replying from another account because I'm unable to access mine. Anyway, someone posted that it is for sure happening because the game is spamming cpuid. Do you think it's possible to somehow mitigate this?

EDIT: here is the link to his post https://www.unknowncheats.me/forum/3497830-post11.html

jonomango commented 1 year ago

> It's ok, I'm replying from another account because I'm unable to access mine. Anyway, someone posted that it is for sure happening because the game is spamming cpuid. Do you think it's possible to somehow mitigate this?

It's actually almost impossible to mitigate this on Intel CPUs, since cpuid is an unconditional vm-exit with no VMCS control to disable it (if that really is the cause of the lag). It is possible, though, to emulate the subsequent cpuid instructions after the first vm-exit, as detailed in a VMware paper, but that's way too complex for a side project like this.
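To make the cost concrete, here is roughly what a cpuid exit handler has to do on every single exit. This is a sketch under assumed names, not hv's actual code; hv's guest context structure and helpers will differ:

```cpp
#include <intrin.h>
#include <cstdint>

// assumed hv helpers, declared elsewhere in the project
extern uint64_t vmx_vmread(uint64_t field);
extern void vmx_vmwrite(uint64_t field, uint64_t value);

// VMCS field encodings (from the Intel SDM)
constexpr uint64_t VMCS_GUEST_RIP = 0x681E;
constexpr uint64_t VMCS_VMEXIT_INSTRUCTION_LENGTH = 0x440C;

// hypothetical guest register context -- hv's real structure differs
struct guest_context {
  uint64_t rax, rbx, rcx, rdx;
};

// invoked on every cpuid vm-exit. the vm-exit/vm-entry round trip alone
// costs on the order of a thousand cycles, which is what adds up when
// the game executes cpuid in a tight loop.
void emulate_cpuid(guest_context* ctx) {
  int regs[4];
  __cpuidex(regs, static_cast<int>(ctx->rax), static_cast<int>(ctx->rcx));

  // copy the host results into the guest's registers
  ctx->rax = static_cast<uint32_t>(regs[0]);
  ctx->rbx = static_cast<uint32_t>(regs[1]);
  ctx->rcx = static_cast<uint32_t>(regs[2]);
  ctx->rdx = static_cast<uint32_t>(regs[3]);

  // advance the guest RIP past the cpuid instruction
  auto const rip = vmx_vmread(VMCS_GUEST_RIP);
  auto const len = vmx_vmread(VMCS_VMEXIT_INSTRUCTION_LENGTH);
  vmx_vmwrite(VMCS_GUEST_RIP, rip + len);
}
```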

chiefmasterR commented 1 year ago

> > It's ok, I'm replying from another account because I'm unable to access mine. Anyway, someone posted that it is for sure happening because the game is spamming cpuid. Do you think it's possible to somehow mitigate this?
>
> It's actually almost impossible to mitigate this on Intel CPUs, since cpuid is an unconditional vm-exit with no VMCS control to disable it (if that really is the cause of the lag). It is possible, though, to emulate the subsequent cpuid instructions after the first vm-exit, as detailed in a VMware paper, but that's way too complex for a side project like this.

I think it would be a good solution to devirtualize the CPUs while the game is loading and virtualize them again once it's finished. Could there be any problems?

jonomango commented 1 year ago

> > > It's ok, I'm replying from another account because I'm unable to access mine. Anyway, someone posted that it is for sure happening because the game is spamming cpuid. Do you think it's possible to somehow mitigate this?
> >
> > It's actually almost impossible to mitigate this on Intel CPUs, since cpuid is an unconditional vm-exit with no VMCS control to disable it (if that really is the cause of the lag). It is possible, though, to emulate the subsequent cpuid instructions after the first vm-exit, as detailed in a VMware paper, but that's way too complex for a side project like this.
>
> I think it would be a good solution to devirtualize the CPUs while the game is loading and virtualize them again once it's finished. Could there be any problems?

That's really not a bad idea at all, but you'll essentially have a very easy-to-detect driver loaded before the hypervisor can get started. If you can manage to stay undetected in that short period of time, then it's a great solution, but... at that point you would need to already have an undetected driver, so you might as well just use that and ditch the hypervisor completely 😄.

chiefmasterR commented 1 year ago

I've got another idea: get the return address in our cpuid emulation function so we can find out who is spamming it, then EPT hook the page and NOP out the cpuid instruction. Still, I don't know how good it is.

jonomango commented 1 year ago

> I've got another idea: get the return address in our cpuid emulation function so we can find out who is spamming it, then EPT hook the page and NOP out the cpuid instruction. Still, I don't know how good it is.

That could work, but it would have to be specific to the game/anticheat. A NOP probably wouldn't work, since whoever executes cpuid presumably consumes its output, but the idea is good. It just wouldn't work as a general solution.

chiefmasterR commented 1 year ago

So, I'm currently trying to do what I said: I will log calls inside of emulate_cpuid(). But I have a question: how do I get the return address?

jonomango commented 1 year ago

What do you mean by return address? You can check the guest RIP in the VMCS to see who executed cpuid, but finding the return address would be difficult without unwinding the stack frame.

chiefmasterR commented 1 year ago

I mean just get the address of whoever executed the cpuid instruction. Should I just do this? vmx_vmread(VMCS_GUEST_RIP);

EDIT: yes, it is vmx_vmread(VMCS_GUEST_RIP)
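Putting it together, the logging could look something like this inside emulate_cpuid(). This is a sketch, and the HV_LOG_INFO() format string is an assumption:

```cpp
// inside emulate_cpuid(), before the usual emulation path:
// VMCS_GUEST_RIP holds the address of the cpuid instruction that
// caused the exit, i.e. the code that is spamming it.
auto const guest_rip = vmx_vmread(VMCS_GUEST_RIP);
HV_LOG_INFO("cpuid executed at guest rip = 0x%llX.", guest_rip);
```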

nulledc0de commented 10 months ago

> > It's ok, I'm replying from another account because I'm unable to access mine. Anyway, someone posted that it is for sure happening because the game is spamming cpuid. Do you think it's possible to somehow mitigate this?
>
> It's actually almost impossible to mitigate this on Intel CPUs, since cpuid is an unconditional vm-exit with no VMCS control to disable it (if that really is the cause of the lag). It is possible, though, to emulate the subsequent cpuid instructions after the first vm-exit, as detailed in a VMware paper, but that's way too complex for a side project like this.

Sorry to bump this old post, but I'm also running into the issue of it running slow. I was curious if you had a link to the VMware paper you were referring to?

Ivann-n commented 5 months ago

> > It's ok, I'm replying from another account because I'm unable to access mine. Anyway, someone posted that it is for sure happening because the game is spamming cpuid. Do you think it's possible to somehow mitigate this?
>
> It's actually almost impossible to mitigate this on Intel CPUs, since cpuid is an unconditional vm-exit with no VMCS control to disable it (if that really is the cause of the lag). It is possible, though, to emulate the subsequent cpuid instructions after the first vm-exit, as detailed in a VMware paper, but that's way too complex for a side project like this.

I too am interested in this paper.