lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0
7.72k stars · 739 forks

Not working on AMD card after trying a few things listed on here. #537

Open ZeroNyte opened 6 months ago

ZeroNyte commented 6 months ago

Checklist

What happened?

Trying to run Forge with my 6750 XT, to no avail. Things I've tried:

- COMMANDLINE_ARGS= --directml --skip-torch-cuda-test --always-normal-vram --skip-version-check
- commenting out all the @torch.inference_mode() decorators (adding # before them) in:
  - \ldm_patched\modules\utils.py, line 407
  - \modules_forge\forge_loader.py, lines 236 and 242
- changing "with torch.inference_mode():" to "with torch.no_grad():" in:
  - \modules\processing.py, line 817
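The decorator and context-manager edits listed above all amount to the same swap, sketched below. The try/except fallback is only there so the snippet runs without torch installed; it is not part of the suggested edit.

```python
import contextlib

try:
    import torch
    # torch.no_grad() only disables autograd tracking; torch.inference_mode()
    # additionally marks outputs as inference tensors, which some DirectML
    # builds reject. Hence the suggested swap in processing.py.
    grad_off = torch.no_grad
except ImportError:
    # torch absent in this sketch's environment: use a do-nothing context.
    grad_off = contextlib.nullcontext

with grad_off():
    result = 2 + 2  # stand-in for the sampling / decoding work
```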

I also tried both CPU and GPU seed generation and restarted my PC; nothing worked. I'm only using a single SDXL checkpoint.

The only result I get is: TypeError: 'NoneType' object is not iterable

Steps to reproduce the problem

Run Forge on an AMD GPU.

What should have happened?

It should work, based on what I have seen reported here.

What browsers do you use to access the UI ?

Mozilla Firefox

Sysinfo

sysinfo-2024-03-11-18-07.json

Console logs

venv "G:\StableDiffed\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments: --directml --skip-torch-cuda-test --always-normal-vram --skip-version-check
Using directml with device:
Total VRAM 1024 MB, total RAM 32672 MB
Set vram state to: NORMAL_VRAM
Device: privateuseone
VAE dtype: torch.float32
CUDA Stream Activated:  False
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
reading lora G:\StableDiffed\stable-diffusion-webui-forge\models\Lora\shark_tailv1.safetensors: UnicodeDecodeError
Traceback (most recent call last):
  File "G:\StableDiffed\stable-diffusion-webui-forge\extensions-builtin\Lora\network.py", line 38, in __init__
    self.metadata = cache.cached_data_for_file('safetensors-metadata', "lora/" + self.name, filename, read_metadata)
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\cache.py", line 114, in cached_data_for_file
    value = func()
  File "G:\StableDiffed\stable-diffusion-webui-forge\extensions-builtin\Lora\network.py", line 31, in read_metadata
    metadata = sd_models.read_metadata_from_safetensors(filename)
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\sd_models.py", line 290, in read_metadata_from_safetensors
    json_obj = json.loads(json_data)
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 341, in loads
    s = s.decode(detect_encoding(s), 'surrogatepass')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xac in position 121317: invalid start byte

ControlNet preprocessor location: G:\StableDiffed\stable-diffusion-webui-forge\models\ControlNetPreprocessor
Loading weights [d3ee23d452] from G:\StableDiffed\stable-diffusion-webui-forge\models\Stable-diffusion\autismmixSDXL_autismmixDPO.safetensors
2024-03-11 19:06:21,639 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
model_type EPS
UNet ADM Dimension 2816
Startup time: 11.7s (prepare environment: 1.0s, import torch: 3.8s, import gradio: 1.1s, setup paths: 1.0s, initialize shared: 0.1s, other imports: 0.5s, list SD models: 0.1s, load scripts: 2.9s, create ui: 0.6s, gradio launch: 0.5s).
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
loading stable diffusion model: NameError
Traceback (most recent call last):
  File "G:\StableDiffed\stable-diffusion-webui-forge\launch.py", line 51, in <module>
    main()
  File "G:\StableDiffed\stable-diffusion-webui-forge\launch.py", line 47, in main
    start()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\launch_utils.py", line 549, in start
    main_thread.loop()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\sd_models.py", line 509, in get_sd_model
    load_model()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\sd_models.py", line 585, in load_model
    sd_model = forge_loader.load_model_for_a1111(timer=timer, checkpoint_info=checkpoint_info, state_dict=state_dict)
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules_forge\forge_loader.py", line 250, in load_model_for_a1111
    sd_model.decode_first_stage = patched_decode_first_stage
NameError: name 'patched_decode_first_stage' is not defined

Stable diffusion model failed to load
Loading weights [d3ee23d452] from G:\StableDiffed\stable-diffusion-webui-forge\models\Stable-diffusion\autismmixSDXL_autismmixDPO.safetensors
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
loading stable diffusion model: NameError
Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui.py", line 1178, in <lambda>
    update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\shared_items.py", line 133, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\sd_models.py", line 509, in get_sd_model
    load_model()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\sd_models.py", line 585, in load_model
    sd_model = forge_loader.load_model_for_a1111(timer=timer, checkpoint_info=checkpoint_info, state_dict=state_dict)
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules_forge\forge_loader.py", line 250, in load_model_for_a1111
    sd_model.decode_first_stage = patched_decode_first_stage
NameError: name 'patched_decode_first_stage' is not defined

Stable diffusion model failed to load
Loading weights [d3ee23d452] from G:\StableDiffed\stable-diffusion-webui-forge\models\Stable-diffusion\autismmixSDXL_autismmixDPO.safetensors
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
loading stable diffusion model: NameError
Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 703, in pages_html
    create_html()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 699, in create_html
    ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages]
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 699, in <listcomp>
    ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages]
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 518, in create_html
    self.items = {x["name"]: x for x in items_list}
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 518, in <dictcomp>
    self.items = {x["name"]: x for x in items_list}
  File "G:\StableDiffed\stable-diffusion-webui-forge\extensions-builtin\Lora\ui_extra_networks_lora.py", line 82, in list_items
    item = self.create_item(name, index)
  File "G:\StableDiffed\stable-diffusion-webui-forge\extensions-builtin\Lora\ui_extra_networks_lora.py", line 69, in create_item
    elif shared.sd_model.is_sdxl and sd_version != network.SdVersion.SDXL:
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\shared_items.py", line 133, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\sd_models.py", line 509, in get_sd_model
    load_model()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\sd_models.py", line 585, in load_model
    sd_model = forge_loader.load_model_for_a1111(timer=timer, checkpoint_info=checkpoint_info, state_dict=state_dict)
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules_forge\forge_loader.py", line 250, in load_model_for_a1111
    sd_model.decode_first_stage = patched_decode_first_stage
NameError: name 'patched_decode_first_stage' is not defined

Stable diffusion model failed to load
Loading weights [d3ee23d452] from G:\StableDiffed\stable-diffusion-webui-forge\models\Stable-diffusion\autismmixSDXL_autismmixDPO.safetensors
Traceback (most recent call last):
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 703, in pages_html
    create_html()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 699, in create_html
    ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages]
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 699, in <listcomp>
    ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages]
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 518, in create_html
    self.items = {x["name"]: x for x in items_list}
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 518, in <dictcomp>
    self.items = {x["name"]: x for x in items_list}
  File "G:\StableDiffed\stable-diffusion-webui-forge\extensions-builtin\Lora\ui_extra_networks_lora.py", line 82, in list_items
    item = self.create_item(name, index)
  File "G:\StableDiffed\stable-diffusion-webui-forge\extensions-builtin\Lora\ui_extra_networks_lora.py", line 69, in create_item
    elif shared.sd_model.is_sdxl and sd_version != network.SdVersion.SDXL:
AttributeError: 'NoneType' object has no attribute 'is_sdxl'
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
loading stable diffusion model: NameError
Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 703, in pages_html
    create_html()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 699, in create_html
    ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages]
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 699, in <listcomp>
    ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages]
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 518, in create_html
    self.items = {x["name"]: x for x in items_list}
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 518, in <dictcomp>
    self.items = {x["name"]: x for x in items_list}
  File "G:\StableDiffed\stable-diffusion-webui-forge\extensions-builtin\Lora\ui_extra_networks_lora.py", line 82, in list_items
    item = self.create_item(name, index)
  File "G:\StableDiffed\stable-diffusion-webui-forge\extensions-builtin\Lora\ui_extra_networks_lora.py", line 69, in create_item
    elif shared.sd_model.is_sdxl and sd_version != network.SdVersion.SDXL:
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\shared_items.py", line 133, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\sd_models.py", line 509, in get_sd_model
    load_model()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\sd_models.py", line 585, in load_model
    sd_model = forge_loader.load_model_for_a1111(timer=timer, checkpoint_info=checkpoint_info, state_dict=state_dict)
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules_forge\forge_loader.py", line 250, in load_model_for_a1111
    sd_model.decode_first_stage = patched_decode_first_stage
NameError: name 'patched_decode_first_stage' is not defined

Stable diffusion model failed to load
Traceback (most recent call last):
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "G:\StableDiffed\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 703, in pages_html
    create_html()
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 699, in create_html
    ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages]
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 699, in <listcomp>
    ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages]
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 518, in create_html
    self.items = {x["name"]: x for x in items_list}
  File "G:\StableDiffed\stable-diffusion-webui-forge\modules\ui_extra_networks.py", line 518, in <dictcomp>
    self.items = {x["name"]: x for x in items_list}
  File "G:\StableDiffed\stable-diffusion-webui-forge\extensions-builtin\Lora\ui_extra_networks_lora.py", line 82, in list_items
    item = self.create_item(name, index)
  File "G:\StableDiffed\stable-diffusion-webui-forge\extensions-builtin\Lora\ui_extra_networks_lora.py", line 69, in create_item
    elif shared.sd_model.is_sdxl and sd_version != network.SdVersion.SDXL:
AttributeError: 'NoneType' object has no attribute 'is_sdxl'

Additional information

No response

Postmoderncaliban commented 6 months ago

Looks like your graphics card doesn't get recognized. Are you sure that torch is correctly installed?

ZeroNyte commented 6 months ago

As far as I know it is; I went into the venv, activated it, and installed it there. That's how it should be done, right?

Postmoderncaliban commented 6 months ago

What command did you use to install it?

ZeroNyte commented 6 months ago

Oh wait, I installed torch-directml but not torch itself. That should have happened on first startup/run, right? Or did I miss anything in that regard?

Postmoderncaliban commented 6 months ago

No, torch-directml is correct. Can you drop the --skip-torch-cuda-test argument and see what error it gives you now?

ZeroNyte commented 6 months ago

RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Postmoderncaliban commented 6 months ago

Try deleting the venv folder, creating a new one, activating the virtual environment, and reinstalling torch with pip install torch-directml.

ZeroNyte commented 6 months ago

Do I create it by running webui-user.bat, or just via the command line? Something like: mkdir projectA, cd projectA, python -m venv env?

Postmoderncaliban commented 6 months ago

Commandline is fine. Btw, you might wanna update python as well.
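The recreate-and-reinstall steps from the comments above can be sketched like this. The install path comes from the logs in this thread; adjust it, and the Python launcher name, to your setup.

```shell
# Recreate a clean venv and install the DirectML torch build into it.
cd "G:\StableDiffed\stable-diffusion-webui-forge"  # your Forge folder
rmdir /s /q venv                # delete the old virtual environment
python -m venv venv             # create a fresh one with your Python 3.10
venv\Scripts\activate.bat       # activate it (cmd.exe syntax)
pip install torch-directml      # pulls in a matching torch build as a dependency
```

After that, launching webui-user.bat should pick up the new venv automatically.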

ZeroNyte commented 6 months ago

So I made a new venv, and I get RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check again.

Also, when I add it back and try to generate something, this pops up in the terminal: AttributeError: 'NoneType' object has no attribute 'is_sdxl', and the progress field just shows "?/?".

Postmoderncaliban commented 6 months ago

Try switching the RNG source from GPU to CPU.

ZeroNyte commented 6 months ago

it already was

ZeroNyte commented 6 months ago

And now it works for some reason; I didn't change anything in the last hour... but I'll try again tomorrow and see if it acts up then.

Ah... never mind, I had the wrong UI folder opened. Guess that's what you get for having more than one UI installed.

DGdev91 commented 6 months ago

Wait a sec... Automatic1111's SD WebUI points to a different fork for AMD on Windows, which is lshqqytiger's, with DirectML support: https://github.com/lshqqytiger/stable-diffusion-webui-directml But this project is forked from the default WebUI, not lshqqytiger's fork, so it's probably missing all the DirectML stuff. Or rather: there is indeed some DirectML code inside Forge, but there are also reports saying it doesn't work properly: https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/58 https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/570

So until full ROCm support is released on Windows, your best bet is either lshqqytiger's fork or Linux.

ZeroNyte commented 6 months ago

So I've been trying a couple of different UIs. A1111 DirectML with the Lobe extension has been working for the most part; it sometimes stops with certain prompts, but it's currently running 832x1216 without much issue, except that it's slow: about 10 minutes at that resolution with 25 steps.

mr-september commented 6 months ago

Still can't get Forge to work, even though lshqqytiger's fork works on this system.

Forge: running with --directml --skip-torch-cuda-test, then manually activating the venv to pip install torch_directml, I get this:

Using directml with device:
Total VRAM 1024 MB, total RAM 32688 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
Set vram state to: LOW_VRAM
Device: privateuseone
VAE dtype: torch.float32
CUDA Stream Activated:  False
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
==============================================================================
You are running torch 2.0.0+cpu.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.

I run again with --reinstall-torch, get this:

ImportError: DLL load failed while importing torch_directml_native: The specified procedure could not be found.
Press any key to continue . . .

So I go back and manually activate the venv to pip install torch_directml again. OK, the browser UI appears, but I'm back to getting the same CPU indication as above:

Using directml with device:
Total VRAM 1024 MB, total RAM 32688 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
Set vram state to: LOW_VRAM
Device: privateuseone
VAE dtype: torch.float32
CUDA Stream Activated:  False
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
==============================================================================
You are running torch 2.0.0+cpu.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

So I try to generate any random thing in txt2img, without changing any settings:

Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
To load target model SD1ClipModel
Begin to load 1 model
Moving model(s) has taken 0.00 seconds
Model loaded in 3.2s (load weights from disk: 0.4s, forge load real models: 2.1s, calculate empty prompt: 0.6s).
Traceback (most recent call last):
  File "E:\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "E:\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "E:\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "E:\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "E:\stable-diffusion-webui-forge\modules\processing.py", line 848, in process_images_inner
    p.rng = rng.ImageRNG((opt_C, p.height // opt_f, p.width // opt_f), p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, seed_resize_from_h=p.seed_resize_from_h, seed_resize_from_w=p.seed_resize_from_w)
  File "E:\stable-diffusion-webui-forge\modules\rng.py", line 114, in __init__
    self.generators = [create_generator(seed) for seed in seeds]
  File "E:\stable-diffusion-webui-forge\modules\rng.py", line 114, in <listcomp>
    self.generators = [create_generator(seed) for seed in seeds]
  File "E:\stable-diffusion-webui-forge\modules\rng.py", line 86, in create_generator
    generator = torch.Generator(device).manual_seed(int(seed))
RuntimeError: Device type privateuseone is not supported for torch.Generator() api.
Device type privateuseone is not supported for torch.Generator() api.
*** Error completing request
*** Arguments: ('task(vmmt19p6bj4fhn2)', <gradio.routes.Request object at 0x0000024889B43FD0>, 'carbon nanoparticles', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 
'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "E:\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

---
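The final RuntimeError is the same "privateuseone" limitation discussed earlier in the thread: torch.Generator() cannot be hosted on DirectML's device. A hypothetical helper (the name and fallback are illustrative, not Forge's actual code) showing what switching the RNG source to CPU effectively does:

```python
def rng_device(requested: str) -> str:
    """Pick the device that can actually host a torch.Generator."""
    # DirectML exposes AMD GPUs as "privateuseone", which torch.Generator()
    # does not support, so seeding must fall back to the CPU; the sampled
    # noise can then be moved to the GPU afterwards.
    if requested.startswith("privateuseone"):
        return "cpu"
    return requested
```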