AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

Choosing which GPU to use when running #1561

Closed huotarih closed 2 years ago

huotarih commented 2 years ago

Is your feature request related to a problem? Please describe.

My problem is that I have four GPUs I'd like to load-balance requests across. In my mind this could be achieved by running multiple instances and load balancing with nginx to different ports (i.e. 7860, 7861, 7862, 7863). If I could start the UI with a GPU indicator, they would all use different GPUs.

Describe the solution you'd like

A launch parameter, e.g. --gpu [x]

Describe alternatives you've considered
Parallelization support
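The multi-instance setup described in this request can already be approximated with CUDA_VISIBLE_DEVICES. A minimal launcher sketch, assuming webui.sh is in the current directory and there are four CUDA GPUs numbered 0-3 (the ports and paths here are assumptions, not project defaults):

```shell
#!/bin/sh
# Launch one webui instance per GPU, each pinned to a single card via
# CUDA_VISIBLE_DEVICES and listening on its own port (7860-7863).
for i in 0 1 2 3; do
  CUDA_VISIBLE_DEVICES=$i ./webui.sh --listen --port "$((7860 + i))" &
done
wait
```

nginx can then round-robin an upstream block over those four ports to complete the load-balancing picture.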

AugmentedRealityCat commented 2 years ago

I have been able to select a specific GPU by adding this line to the webui-user.bat file, on the line just after set COMMANDLINE_ARGS= :

set CUDA_VISIBLE_DEVICES=1

Bear in mind that your main GPU will usually be device #0, the second will be #1, and so on. In the example above I am using my second GPU, so #1 (just like Supergreg). If you wanted to use your fourth GPU, you would use this line:

set CUDA_VISIBLE_DEVICES=3

This was never documented specifically for Automatic1111 as far as I can tell; it comes from the initial Stable Diffusion release from August, and since Automatic1111 was based on that code, I thought it might just work. And it did!

One caveat: Windows GPU monitoring via the Task Manager's Performance tab will reflect the VRAM use on that GPU, but not its compute load, for some unknown reason. I now look at its temperature to check whether it's working hard or not.

jtoy commented 1 year ago

I was testing this and it doesn't seem to work; will test more!

Chris7c0 commented 1 year ago

On Linux, I added --device-id=n to webui-user.sh to run on my second GPU. In my case:

export COMMANDLINE_ARGS="--device-id=1"
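For readers wondering what a flag like this typically does: in a PyTorch application, a device-id style option usually just selects which visible CUDA device the code targets. A hypothetical sketch (the function name is illustrative, not webui's actual code):

```python
def device_from_id(device_id=None):
    """Map an optional device-id flag to a PyTorch-style device string.

    Illustrative only: models how --device-id=1 would typically be turned
    into "cuda:1" while leaving all GPUs visible to the process.
    """
    if device_id is not None:
        return f"cuda:{device_id}"  # e.g. --device-id=1 -> "cuda:1"
    return "cuda"                   # default: first visible device

print(device_from_id(1))  # cuda:1
```

This also hints at why the behavior differs from CUDA_VISIBLE_DEVICES later in the thread: selecting a device this way does not hide the other GPUs from the process.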

flybfree commented 1 year ago

I had been using set CUDA_VISIBLE_DEVICES=1 to select my GPU with the most memory. After a pull this morning it no longer works; it always uses GPU 0 instead of GPU 1. Was there a change to the GPU selection ability?

Update: It may be working after all. The system info extension shows my primary 3070 Ti with 8GB; however, when I actually run txt2img, the info at the bottom of that page shows 12GB VRAM, which is what the 3060 (the desired card) has. So if that is right, the system information extension is reporting the wrong card. What made me look was that I ran out of memory during an upscale, which had never happened before. So, is there a way to know for certain which card is being utilized?

UPDATE: CUDA_VISIBLE_DEVICES=1 is working. The issue is actually with the system information extension display.

AugmentedRealityCat commented 1 year ago

So, is there a way to know for certain which card is being utilized?

I check the VRAM usage in the Performance tab of the Task Manager (under Windows 10). You can see it jump up and down.
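Another way to check which card is doing the work (on Linux, or on Windows with the NVIDIA driver installed) is nvidia-smi. A command-line example; run it while a generation is in progress:

```shell
# Per-GPU index, name, memory in use, and compute utilization.
# The busy card will show high utilization.gpu during generation.
nvidia-smi --query-gpu=index,name,memory.used,utilization.gpu --format=csv
```

Unlike the Task Manager caveat mentioned above, this reports the compute load as well as the VRAM.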

travisscottwilder commented 1 year ago

On Linux, I edited --device-id=n in webui-user.sh to enable on my second GPU. In my case:

export COMMANDLINE_ARGS="--device-id=1"

(ubuntu linux)

This worked for me. HOWEVER, it still shares memory with GPU0. When monitoring my GPUs I noticed that with --device-id=1 there is always a small initial memory allocation on GPU0 (around 700MB), and all other usage and memory is then honored by the specified device-id.

HOWEVER, if you use CUDA_VISIBLE_DEVICES, the specified GPU is honored 100% of the time and there is no small memory allocation on GPU0.

It is also important to note that I wasn't able to specify CUDA_VISIBLE_DEVICES AND --device-id together; when both were specified, I received CUDA errors when running webui.sh.

--

[THIS ADDED MEMORY TO GPU0 and then uses GPU1] webui-user.sh:
export COMMANDLINE_ARGS="--listen --port=2 --device-id=1"

--

[THIS GAVE ME ERRORS] webui-user.sh:
export COMMANDLINE_ARGS="--listen --port=2 --device-id=1"
export CUDA_VISIBLE_DEVICES=1

--

[THIS USES GPU 1 100% of the time, good to go] webui-user.sh:
export COMMANDLINE_ARGS="--listen --port=2"
export CUDA_VISIBLE_DEVICES=1
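The difference between the two mechanisms can be modeled in plain Python (a sketch, not webui's code; the ~700MB on GPU0 is consistent with the process still being able to see that device). A device-id flag only selects among visible devices, while CUDA_VISIBLE_DEVICES filters and renumbers what the process can see at all:

```python
def visible_gpus(all_gpus, cuda_visible_devices=None):
    """Model how CUDA_VISIBLE_DEVICES filters and renumbers devices.

    all_gpus: physical GPU names, indexed 0..n-1.
    cuda_visible_devices: the env-var string, e.g. "1" or "2,3", or None.
    Returns the list the process sees, reindexed from 0.
    """
    if cuda_visible_devices is None:
        return list(all_gpus)  # every GPU stays visible
    keep = [int(i) for i in cuda_visible_devices.split(",")]
    return [all_gpus[i] for i in keep]

gpus = ["RTX 3070 Ti", "RTX 3060"]

# --device-id=1: the process still sees both GPUs and merely targets
# index 1, so stray allocations on index 0 remain possible.
print(visible_gpus(gpus))       # ['RTX 3070 Ti', 'RTX 3060']

# CUDA_VISIBLE_DEVICES=1: the process sees only the second card,
# renumbered as device 0, so nothing can land on the first card.
print(visible_gpus(gpus, "1"))  # ['RTX 3060']
```

This renumbering is also why combining both settings can fail: with CUDA_VISIBLE_DEVICES=1 set, the only visible device is 0, so --device-id=1 points at a device that no longer exists from the process's point of view.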

48design commented 1 year ago

Maybe this is something for you: #11614

chunyu-li commented 1 year ago

> [quoting travisscottwilder's comment above on --device-id vs CUDA_VISIBLE_DEVICES]

I have been looking for a method to avoid small memory usage in GPU 0 for a long time, and your advice really helps me!

Mytrea commented 11 months ago

I use an eGPU on a laptop that already has an Nvidia graphics card in addition to the integrated card (iGPU = device 0, dGPU = device 1, eGPU = device 2). Using CUDA_VISIBLE_DEVICES=2 was not working, so I tried =1 and it worked. I'm thinking that because the integrated graphics are not CUDA-capable, CUDA might just skip that one when numbering the devices. Not sure, but hopefully this helps someone having the same problem.
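That observation can be sketched in a few lines of Python (an assumption about the cause, as the comment itself notes): if CUDA enumerates only CUDA-capable GPUs, the OS device index and the CUDA device index diverge whenever an iGPU sits earlier in the list.

```python
def cuda_index(os_devices, os_index):
    """Return the CUDA index of the OS device at os_index, or None.

    os_devices: list of (name, is_cuda_capable) in OS enumeration order.
    CUDA numbers only the capable devices, starting from 0.
    """
    cuda_devices = [i for i, (_, capable) in enumerate(os_devices) if capable]
    return cuda_devices.index(os_index) if os_index in cuda_devices else None

laptop = [("Intel iGPU", False), ("NVIDIA dGPU", True), ("NVIDIA eGPU", True)]
print(cuda_index(laptop, 2))  # 1 -> the eGPU is CUDA device 1, not 2
```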

StudioDUzes commented 8 months ago

"set CUDA_VISIBLE_DEVICES 1" without "=" OK for windows

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --port 7861 --ckpt-dir "N:\Models SDXL" --lora-dir "N:\Lora SDXL" --vae-dir "N:\VAE" --esrgan-models-path "N:\ESRGAN"

set CUDA_VISIBLE_DEVICES 1

call webui.bat