jamesstringerparsec / Easy-GPU-PV

A Project dedicated to making GPU Partitioning on Windows easier!
4.01k stars · 407 forks

What exactly is "Partitioning"? Anyone? #298

Open po0p opened 1 year ago

po0p commented 1 year ago

Hi, can someone explain to me what exactly this "partitioning" is that we are talking about? Or what is it from Microsoft's perspective? I am running W10 on both host and VM, NVIDIA GPU. No GPU tinkering on the host: latest driver, MSI Afterburner only to force cooler settings.

Benchmark from the VM: (screenshot)
Benchmark from the host: (screenshot) (LMAO)
My "partitioning" settings (yes, I am using 5 MB of video RAM like the chad I am): (screenshot) (LMAO2)
GPU load while benchmarking the VM (graph 2): (screenshot)

For the sake of curiosity I have tried changing EVERY setting there is. None of them had any effect. The only thing that seemed to matter is HighMemoryMappedIoSpace: if it's lower than 8 GB, the VM driver fails with code 43. Please explain this to me, because I really feel like I am being laughed at. There are something like 12 parameters for Set-VMGpuPartitionAdapter that don't seem to do anything. Am I stupid?
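For context, the parameters in question are the ones exposed by Hyper-V's Set-VMGpuPartitionAdapter cmdlet. A hedged sketch of what a typical GPU-P configuration looks like; the VM name and the numeric values here are placeholders for illustration, not recommendations:

```powershell
# Assumes Windows 10/11 Pro/Enterprise with Hyper-V enabled.
# "GPUPV-VM" is a hypothetical VM name; the VM must be powered off.
$vm = "GPUPV-VM"

# The "~12 parameters" mentioned above: Min/Max/Optimal triplets for
# VRAM, Encode, Decode and Compute on the partition adapter.
Set-VMGpuPartitionAdapter -VMName $vm `
    -MinPartitionVRAM 80000000 -MaxPartitionVRAM 100000000 -OptimalPartitionVRAM 100000000 `
    -MinPartitionEncode 80000000 -MaxPartitionEncode 100000000 -OptimalPartitionEncode 100000000 `
    -MinPartitionDecode 80000000 -MaxPartitionDecode 100000000 -OptimalPartitionDecode 100000000 `
    -MinPartitionCompute 80000000 -MaxPartitionCompute 100000000 -OptimalPartitionCompute 100000000

# The MMIO space discussed in this thread is configured separately, on the
# VM itself; too little high MMIO space is a common cause of the code 43
# described above.
Set-VM -VMName $vm -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 1GB -HighMemoryMappedIoSpace 32GB
```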

po0p commented 1 year ago

Decided to run a torture test in the morning. (screenshot)

po0p commented 1 year ago

Another couple of sanity tests to check whether I've developed some form of severe brain damage from using Microsoft products 14 hours a day. (The VM is properly rebooted after changing any settings, btw.) (screenshots)

po0p commented 1 year ago

OCCT VRAM test. With allegedly 1 kilobyte of "partitioned" VRAM, my small VM is able to consume almost all of the VRAM on the host. (screenshot)

po0p commented 1 year ago

Anyone care to explain pls?

0303rizky commented 11 months ago

Anyone care to explain pls?

Have you found the answer yet? I have the same question xD

po0p commented 10 months ago

Bump. Any experts?

matti commented 10 months ago

Also interested

cokecan72 commented 7 months ago

I'd also like to know more about how this is supposed to work. From my testing, I'm seeing the same kinds of results as @po0p: regardless of what percentage is defined in the GPU partition for VRAM, Compute, Encode, Decode, etc., running anything GPU-intensive in the guest VM basically maxes out the host's GPU usage. Before jumping into any of this and actually testing, my assumption was that if I had these all set to 50% of the total host GPU resources, I'd only see around 50% of the GPU's resources being consumed on the host.

During my testing I also ran benchmarks on both the host and guest at the same time to see what happened, and what I found is that the host seems to always take priority in terms of GPU resource consumption. Running Unigine on both host and guest concurrently shows the relative FPS of the benchmark in the guest drop from 150-200 down to 20, while the host's remains at 150-200. The Unigine results page reflects this too: I was seeing 24-25K when running the benchmark on host and guest individually, but when I ran them concurrently, the host dropped to 22K while the guest was at 6K. Likewise, doing something less intensive on the host than a Unigine benchmark, e.g. watching a UHD 4K video, shows a smaller drop in relative FPS in the guest's Unigine benchmark and a lower overall score at the end (relative FPS was down to about 90-100 and the total score was 16K).

This leads me to believe that the guest will use GPU resources if they are available, but will very quickly give them up to the host when needed, which is quite different from what all of these GPU-P settings suggest should happen. That behaviour is more closely aligned with how Hyper-V handles resources in general (CPU, RAM, etc.), which leads me to wonder whether this is actually working as intended.
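One way to sanity-check what the hypervisor actually recorded (as opposed to what the percentage settings suggest) is to read the partition values back from the VM. A minimal sketch, assuming a hypothetical VM named "GPUPV-VM":

```powershell
# Read back the partition limits Hyper-V has stored for the VM's
# GPU partition adapter; compare these against the values you set.
Get-VMGpuPartitionAdapter -VMName "GPUPV-VM" |
    Format-List MinPartitionVRAM, MaxPartitionVRAM, OptimalPartitionVRAM,
                MinPartitionCompute, MaxPartitionCompute, OptimalPartitionCompute
```

Note that even when these values are stored as expected, they do not by themselves prove the scheduler enforces them as hard caps, which is exactly the behaviour being questioned in this thread.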

It'd be great to get some clarification from @jamesstringerparsec on this :)

po0p commented 7 months ago

my assumption was that if I had these all set to 50% of the total host GPU resources, that I'd only see around 50% of the GPU's resources being consumed on the host.

@cokecan72 This was my exact thought too. If I "partition" 50% of the GPU, be it compute or VRAM, I was under the impression I am letting this particular VM use NOT MORE than the "partitioned" amount. E.g. in VMware, I can assign a certain number of CPUs and amount of RAM to a VM, and it will usually never consume more than that; at least in my experience with VMware Workstation. I haven't touched GPU-PV or Hyper-V since creating this issue, to be honest, because the most important thing for me was actually limiting GPU performance in case I want to run 5, 10, 20 virtual machines, to be sure that they all get equal access to GPU resources, like what happens with access to CPU and RAM (in VMware at least).