Closed bigmac-79 closed 1 month ago
You can tell the VM which GPU by naming it in the config section of the script my friend :) you need to redo the VM after making the change in the script.
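For anyone unsure where that change goes, here is a rough sketch. This assumes your GPU-PV setup script exposes a GPU-name variable in its config section (the `$GPUName` variable below is an example of that pattern, not necessarily the exact name in your copy of the script):

```powershell
# List the GPU names Windows knows about, so you can copy the exact string:
Get-CimInstance Win32_VideoController | Select-Object Name

# In the script's config section, replace the automatic choice with an explicit name,
# e.g. (variable name may differ in your script):
$GPUName = "NVIDIA GeForce RTX 3080"   # instead of "AUTO"
```

After editing the config, delete and re-create the VM with the script, since the GPU partition adapter is assigned when the VM is built. Note this only works on hosts that support picking a GPU (Windows 11 / Server), which is the limitation discussed below.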
Except on Windows 10, where you can't choose the GPU to use. That's documented, and they noted it.
The only real solution here is to not have the iGPU present on the host OS to begin with, meaning disable it. Or upgrade to Windows 11.
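If you do go the disable-the-iGPU route, it doesn't have to be done in the BIOS; the built-in PnP cmdlets can do it from an elevated PowerShell prompt. A sketch (the instance ID below is a placeholder you'd look up on your own machine):

```powershell
# Run elevated. Find the iGPU's instance ID first:
Get-PnpDevice -Class Display | Format-Table FriendlyName, InstanceId

# Disable it so it is no longer a candidate for GPU-PV assignment
# (replace the placeholder with your iGPU's actual InstanceId):
Disable-PnpDevice -InstanceId "PCI\VEN_8086&DEV_XXXX..." -Confirm:$false

# Re-enable later with:
# Enable-PnpDevice -InstanceId "PCI\VEN_8086&DEV_XXXX..." -Confirm:$false
```

This is reversible, unlike a BIOS change on some boards, but it does mean the host loses the iGPU outputs while it's disabled.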
You're right, I missed that part. Worth upgrading to Windows 11 anyway, since Windows 10 reaches end of life in 2025.
Thanks to those who confirmed there is no way to do this gracefully. I was considering disabling the iGPU. I'm not ready to upgrade to Windows 11; this machine runs a lot of other software, and I don't have time right now to check that everything would still be compatible.
I found a temporary workaround for anyone else with this issue. First, I tried severely reducing the resource percentage each VM is allocated, so that when choosing a GPU the dGPU wouldn't look overallocated, but that made no difference.

In testing this I found that the host seems to pick a GPU based purely on how many running VMs have been assigned each one: when you start a VM, it gets whichever GPU is currently assigned to the fewest running VMs. You can manipulate this by creating extra GPU-PV VMs and starting everything in a specific order, so that the VMs you actually intend to use get the dGPU and the extras get the iGPU; once they're assigned, you can shut the extras down so they don't waste system resources. I've made these extra VMs as lightweight as possible, but a less powerful PC might struggle with a bunch of extra VMs running.

Obviously this isn't an ideal solution, but it gets the job done: multiple VMs all running from a single dGPU, without upgrading to Windows 11 or disabling the iGPU.
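The start-order trick above can be automated with the standard Hyper-V cmdlets. A sketch, with placeholder VM names, assuming the "GPU with the fewest running VMs" behavior described above and that ties go to the dGPU (as observed when the first VM started):

```powershell
# Placeholder names for your real VMs and the lightweight "decoy" GPU-PV VMs.
$realVMs  = "Work-VM-1", "Work-VM-2"     # should end up on the dGPU
$decoyVMs = "Decoy-VM-1", "Decoy-VM-2"   # soak up the iGPU assignments

# Interleave the starts: each real-VM start is followed by a decoy start, so
# the iGPU is always the "fewest VMs" choice when a decoy comes up, and the
# dGPU is tied or ahead when a real VM comes up.
for ($i = 0; $i -lt $realVMs.Count; $i++) {
    Start-VM -Name $realVMs[$i]
    Start-VM -Name $decoyVMs[$i]
}

# Once the real VMs are running, shut the decoys down to free their resources.
$decoyVMs | ForEach-Object { Stop-VM -Name $_ -Force }
```

`Start-VM` and `Stop-VM` are part of the built-in Hyper-V PowerShell module; the assignment logic itself is the undocumented behavior observed above, so verify the order works on your machine before relying on it.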
Glad you found a workaround for your situation. This is better in Win11, as @Kodikuu mentioned, so hopefully you can upgrade soon, or just keep using this :)
I'm running Hyper-V on Windows 10 and used this script to create two Windows 10 VMs. On Windows 10 I can't assign a specific GPU to a VM and have to let one be chosen automatically. Whenever I turn the VMs on, the first one gets my RTX 3080, which is what I want, but the second one gets my integrated graphics chip, which is not desirable. If I start the VMs in the opposite order, whichever starts first gets the 3080, so I know neither has trouble accessing it. I would like both VMs to share the dGPU; I've read about people running up to three VMs plus the host OS off a single GPU, for a total of four users on the same GPU at once.
So even though I know I can't select a specific GPU for the VM to use on Windows 10, is there any way to remove the integrated graphics as an option for the VM to choose when it starts up?