Closed SheepChef closed 1 year ago
It seems to work properly for me.
--backend directml --device-id 1
GPU0
GPU1
As you can see, I've already added --device-id 1; however, the process still uses GPU0, and GPU1 remains unused the whole time.
Run these lines in a venv-activated PowerShell/Command Prompt:
(venv) $ python
>>> import torch_directml
>>> torch_directml.device_name(0) # --device-id 0
>>> torch_directml.device_name(1) # --device-id 1
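The REPL check above can be wrapped into a small script. This is a sketch that assumes torch_directml is installed; the helper name `list_dml_devices` is hypothetical, and it falls back gracefully when the package is absent:

```python
# Hypothetical helper: print every DirectML adapter so you can see
# which index --device-id maps to on your machine.
def list_dml_devices():
    try:
        import torch_directml
    except ImportError:
        return None  # torch_directml not available in this environment
    names = [torch_directml.device_name(i)
             for i in range(torch_directml.device_count())]
    for i, name in enumerate(names):
        print(f"--device-id {i} -> {name}")
    return names

list_dml_devices()
```

On a laptop like the one described, index 0 would typically be the Intel integrated GPU and index 1 the AMD discrete GPU, but the ordering is adapter-dependent, so checking is worthwhile.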
Not sure if this will help, but I have a 5600 APU with an RX 580, and I can choose whether Python runs on the APU or the card through Windows 11 settings. Not sure if this exists in Windows 10.
Right-click the desktop, select Display settings, and click the Graphics option. Find Python in the list; if it's not there, add it. Once added, you can change which GPU is used for that program.
OK, the problem is solved. However, it is a pity that my AMD graphics card, which has only 2 GB of memory, still cannot run the model.
Is there an existing issue for this?
What happened?
Well, my laptop has two GPU devices. GPU0 is an Intel integrated graphics card, which actually uses the CPU for computation. GPU1 is a basic AMD discrete graphics card.
However, GPU1 stays at zero utilization while the model loads, until the script finally crashes due to a lack of memory. I've already added --device-id 1; however, the computation still seems to run on GPU0 and eventually raises a RuntimeError.
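One way to confirm whether --device-id actually lands work on the discrete card is to allocate a small tensor on that adapter directly. This is a sketch assuming torch and torch_directml are installed; the function name `check_device` is hypothetical:

```python
# Hypothetical check: put a tiny tensor on the adapter that
# --device-id 1 should select, and report where it ended up.
def check_device(device_id: int = 1):
    try:
        import torch
        import torch_directml
    except ImportError:
        return None  # environment without torch/DirectML; nothing to check
    dml = torch_directml.device(device_id)
    x = torch.ones(2, 2, device=dml) * 3  # forces a computation on that adapter
    print(torch_directml.device_name(device_id), x.device)
    return x.device

check_device()
```

If this allocation succeeds while GPU1 shows activity in Task Manager, the adapter itself works and the problem lies in how the WebUI passes the device index through.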
Steps to reproduce the problem
There is no reproduction.
What should have happened?
GPU1 should be used for computation rather than GPU0 (CPU).
Version or Commit where the problem happens
4f46f9bd54e0ce25a50fa3c04b82e9bf74b97c66
What Python version are you running on ?
Python 3.10.x
What platforms do you use to access the UI ?
Windows
What device are you running WebUI on?
AMD GPUs (RX 5000 below)
Cross attention optimization
Automatic
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
List of extensions
No
Console logs
Additional information
No response