lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0
8.45k stars 825 forks

AMD GPU does not use 100% of its power when using SDXL models #404

Open yacinesh opened 8 months ago

yacinesh commented 8 months ago

Checklist

What happened?

My AMD GPU doesn't reach 100% utilization while generating images with SDXL models.

Steps to reproduce the problem

  1. select sdxl model
  2. type prompt
  3. click on Generate

What should have happened?

It should generate the image using 100% of the GPU's power.

What browsers do you use to access the UI ?

No response

Sysinfo

sysinfo-2024-02-25-19-35.json

Console logs

Calculating sha256 for C:\a1111\webui_forge_cu121_torch21\webui\models\Stable-diffusion\juggernautXL_version6Rundiffusion.safetensors: 1fe6c7ec54c786040cdabc7b4e89720069d97096922e20d01f13e7764412b47f
Loading weights [1fe6c7ec54] from C:\a1111\webui_forge_cu121_torch21\webui\models\Stable-diffusion\juggernautXL_version6Rundiffusion.safetensors
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.02 seconds
Model loaded in 99.0s (unload existing model: 1.1s, calculate hash: 7.1s, load weights from disk: 1.0s, forge load real models: 87.7s, forge finalize: 0.1s, load textual inversion embeddings: 0.2s, calculate empty prompt: 1.7s).
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  1024.0
[Memory Management] Model Memory (MB) =  9794.134841918945
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  -9794.134841918945
[Memory Management] Requested SYNC Preserved Memory (MB) =  0.0
[Memory Management] Parameters Loaded to SYNC Stream (MB) =  9794.096694946289
[Memory Management] Parameters Loaded to GPU (MB) =  0.0
Moving model(s) has taken 0.08 seconds
 68%|███████████████████████████████████████████████████████▊                          | 17/25 [13:36<06:44, 50.57s/it]
Total progress:  68%|████████████████████████████████████████████▉                     | 17/25 [12:30<06:43, 50.48s/it]
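The negative "Estimated Remaining GPU Memory" line in the log above is the likely cause of the slow generation: the model does not fit in free VRAM, so all 9.8 GB of parameters sit in the SYNC stream (system RAM) and get streamed to the GPU on every step, which shows up as low GPU utilization and ~50 s/it. A minimal sketch of the arithmetic implied by the log, assuming remaining = free − minimal − model (the formula is inferred from the printed numbers, not taken from Forge's actual code):

```python
# Values reported in the [Memory Management] log lines above.
free_mb = 1024.0                 # Current Free GPU Memory (MB)
model_mb = 9794.134841918945     # Model Memory (MB)
minimal_mb = 1024.0              # Minimal Inference Memory (MB)

# Assumed formula, inferred from the log output: VRAM left after
# reserving the inference working set and placing the model.
remaining_mb = free_mb - minimal_mb - model_mb  # -9794.134841918945

# With nothing left, no parameters stay resident on the GPU; the rest
# goes to the SYNC stream (system RAM) and is streamed in per step.
loaded_to_gpu_mb = max(min(model_mb, free_mb - minimal_mb), 0.0)
loaded_to_sync_mb = model_mb - loaded_to_gpu_mb
```

This reproduces the log's numbers: 0 MB resident on the GPU and roughly 9794 MB in the SYNC stream (the log shows 9794.10 MB loaded to SYNC; the small gap from the 9794.13 MB total is internal to Forge). Streaming ~9.8 GB over the bus every step, rather than raw compute, is what dominates the iteration time.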

Additional information

No response

yacinesh commented 8 months ago

[screenshot] My GPU usage when generating

Postmoderncaliban commented 8 months ago

Do you have the Never OOM extension for the UNet enabled?

yacinesh commented 8 months ago

@Postmoderncaliban I don't even know what the OOM extension is

mongolsteppe commented 8 months ago

@yacinesh Did you run into any errors that you had to fix to get this to work? Several users are unable to run forgeui with an AMD GPU to begin with (me included, I get the same problem as in this issue https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/381 )

For future readers, he's using these arguments in the webui-user.bat file according to sysinfo dump: --directml --skip-torch-cuda-test --always-normal-vram --skip-version-check

catboxanon commented 8 months ago

Task Manager does not show GPU utilization correctly by default for most ML workloads, at least with NVIDIA cards. The graph needs to be changed to Cuda (via the dropdown next to the graph's name) to show the real utilization. I'm not sure if this is the same for AMD cards, though. You may need to use something like AMD System Monitor or GPU-Z instead.

303Aki303 commented 8 months ago

> @yacinesh Did you run into any errors that you had to fix to get this to work? Several users are unable to run forgeui with an AMD GPU to begin with (me included, I get the same problem as in this issue #381 )
>
> For future readers, he's using these arguments in the webui-user.bat file according to sysinfo dump: --directml --skip-torch-cuda-test --always-normal-vram --skip-version-check

Hi, I use AMD and can't get Forge UI to work. I used the arguments you mentioned and got "No module named 'torch_directml'". Any idea how to fix it?

patientx commented 8 months ago

In cmd: go to the Forge folder, run `call venv\Scripts\activate.bat`, then run `pip install torch_directml`.
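Spelled out as a Windows cmd session (the install path is taken from the console logs earlier in this issue; adjust it to your own Forge location):

```shell
:: Run in cmd.exe from the folder that contains the venv\ directory.
:: Path below is an example from this issue's logs; adjust to your install.
cd C:\a1111\webui_forge_cu121_torch21\webui

:: Activate Forge's bundled virtual environment so pip installs into it,
:: not into the system Python.
call venv\Scripts\activate.bat

:: Install the DirectML backend that the --directml flag requires.
pip install torch_directml
```

Installing torch_directml into the system Python instead of the venv is the usual cause of the "No module named 'torch_directml'" error persisting after installation.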

mlsterpr0 commented 8 months ago

> on cmd :: go to forge folder , call venv\Scripts\activate.bat

"The system cannot find the path specified." Why does there have to be a venv folder?