zhaodice / qemu-anti-detection

A patch to hide QEMU itself and bypass mhyprot, EAC, nProtect, VMProtect, VProtect, Themida, Enigma Protector, and Safengine Shielden

How about the performance after using it? #71

Closed BoheSama1999 closed 4 months ago

BoheSama1999 commented 5 months ago

How about the performance after using it?

zhaodice commented 5 months ago

You can get about 80% of native CPU performance and 98% of GPU performance with passthrough.

Samuil1337 commented 4 months ago

Meow~, don't think I didn't see your starred repos. VirtIO is completely out of the question, because QEMU is hidden so well that the guest drivers can't even detect the virtual hardware. Using emulated hardware will be slower and cause quite a bit of CPU overhead, so the best solution is to pass through as much as possible: GPU, peripherals, NVMe. The only thing I wouldn't pass through is the entire network card, for security reasons. With that hardware setup you wouldn't feel any performance difference.
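For reference, passing a PCI device through with libvirt looks roughly like this (a minimal sketch; the PCI address is a placeholder and must match the device on your host, e.g. as reported by `lspci`):

```xml
<!-- Sketch: pass a host GPU at 0000:01:00.0 through to the guest.
     domain/bus/slot/function are placeholders for your hardware. -->
<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <address domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
  </source>
</hostdev>
```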

BoheSama1999 commented 4 months ago

meow~ Thank you for your answer. I passed through my USB controller and GPU, used SR-IOV to pass through the network card, pinned the vCPUs and isolated the host cores, and passed through the disk and Bluetooth. However, the most obvious issue I felt was a severe decline in memory performance. I tried hugepages, and although performance did improve significantly, the gap from the physical machine was still noticeable. Then I realized the default hugepage size was only 2M per page, so I set it to 1G x 18 on the kernel command line in GRUB. When I played the game again, the frame rate improved significantly, only about 10 fps less than the host machine.
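For context, reserving "1G x 18" at boot is typically done with the kernel parameters below, and the guest then opts into those pages via `<memoryBacking>` (a sketch of the setup described above; the page count of 18 is specific to this machine):

```shell
# /etc/default/grub -- reserve 18 x 1 GiB static hugepages at boot
GRUB_CMDLINE_LINUX_DEFAULT="... default_hugepagesz=1G hugepagesz=1G hugepages=18"
# then regenerate the GRUB config (e.g. update-grub) and reboot
```

```xml
<!-- libvirt domain XML: back guest RAM with the reserved 1 GiB pages -->
<memoryBacking>
  <hugepages>
    <page size="1" unit="G"/>
  </hugepages>
</memoryBacking>
```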

Samuil1337 commented 4 months ago

Oh wow. That's a real machine with extra steps. I actually don't think that RAM performance is that bad without static hugepages. Did you benchmark memory inside the VM in any way, so you can prove that it's worth the loss of RAM capacity on the host?

BoheSama1999 commented 4 months ago

> Oh wow. That's a real machine with extra steps. I actually don't think that RAM performance is that bad without static hugepages. Did you benchmark memory inside the VM in any way, so you can prove that it's worth the loss of RAM capacity on the host?

I ran AIDA64's memory and cache benchmark. With or without static hugepages, the gap in those numbers is not very large. But I found what looks like a serious problem: regardless of whether 1G hugepages are used, the guest's memory and cache performance is far below the host's. Memory read, write, and copy speeds are about ten times slower than the host, and latency is more than ten times higher. CPU cache performance looks even worse: dozens of times slower, with similarly inflated latency.

Here are the benchmark results:

Neofetch output (screenshot)

Hardware information: Link

1. Host cachemem test

2. Guest without 1G static hugepages (9 cores for guest, 1 core for host). The XML: Link

3. Guest with 1G static hugepages (9 cores for guest, 1 core for host). The XML: Link
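(The linked XML isn't reproduced here; as a purely hypothetical sketch, a "9 cores for guest, 1 core for host" pinning could look like the following, with host core 0 kept free for the emulator:)

```xml
<!-- Hypothetical sketch of the pinning described above; the real
     cpuset values should follow the host topology from `lscpu -e`. -->
<vcpu placement="static">9</vcpu>
<cputune>
  <vcpupin vcpu="0" cpuset="1"/>
  <vcpupin vcpu="1" cpuset="2"/>
  <!-- ... vcpus 2-7 pinned to host cores 3-8 ... -->
  <vcpupin vcpu="8" cpuset="9"/>
  <emulatorpin cpuset="0"/>
</cputune>
```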

Samuil1337 commented 4 months ago

Oh wow that's interesting. Thanks for the info

Samuil1337 commented 4 months ago

I actually tested transparent hugepages (2 MiB each) against static ones (1 GiB each) on my AMD Ryzen 7 3700X.

THP result (screenshot: transparent_hugepages)

SHP result (screenshot: static_hugepages)
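For anyone reproducing this comparison, the page size actually backing the guest can be checked on the host with standard Linux interfaces rather than inferred from benchmarks (the QEMU process name may differ per distro, e.g. qemu-kvm):

```shell
# Host THP policy (always / madvise / never)
cat /sys/kernel/mm/transparent_hugepage/enabled

# How much of the QEMU process is THP-backed
grep AnonHugePages /proc/$(pidof qemu-system-x86_64)/smaps_rollup

# Static hugepage pool: total, free, and page size
grep Huge /proc/meminfo
```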

The statically allocated pages were almost as fast, but had about half a nanosecond lower latency. Neither result is scientific at all, since I only ran each test once on the free version of AIDA. However, they do align with Red Hat's benchmarks, which showed a 2% performance increase with 1 GiB SHP, and they prove that in my case it isn't worth the RAM lost on the host.

Something about your test results is weird, though. The caches of your CPU, which came out two years after mine, are extremely slow compared to your real hardware and especially to my VM. Try adding `<cache mode="passthrough"/>` to the `<cpu>` section and double-check your CPU pinning. Please post your output of `lscpu -e` so I can look over it too.
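A minimal sketch of where that element sits, assuming a host-passthrough CPU model (the topology values below are placeholders and should mirror the actual pinning):

```xml
<!-- Expose the host CPU model and its real cache topology to the guest -->
<cpu mode="host-passthrough" check="none">
  <cache mode="passthrough"/>
  <!-- placeholder topology: 9 pinned cores, no SMT -->
  <topology sockets="1" dies="1" cores="9" threads="1"/>
</cpu>
```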