nod-ai / SHARK-Studio

SHARK Studio -- Web UI for SHARK+IREE High Performance Machine Learning Distribution

Can't run SDXL Turbo at all. My 32 GB of RAM hits 100% and it fails. #2079

Open · MikeStirner opened this issue 10 months ago

MikeStirner commented 10 months ago

What is going wrong: When I run SDXL Turbo it never works. RAM usage climbs until it gets within about 200 MB of being full (it might even hit full), and that's when it fails, which is odd because I normally only use 6 GB of my 32 GB of RAM. I originally installed SHARK earlier today, then realized my drivers were too old, so I updated them and re-ran with the --clear_all flag. I noticed this in the output: ERROR: Exception in ASGI application. The other errors are unclear to me.

What I tried already: I have run with the --clear_all flag and get the same results. I tried removing the Hugging Face files so they had to be redownloaded, which didn't help. I also tried running from the command prompt and running as admin.

OS: Windows 10

GPU: RX 6800

VRAM: 16 GB

GPU driver: Driver Version 23.30.13.03-231122a-397541C-AMD-Software-PRO-Edition

RAM: 32 GB

Log attached. 2024-01-27T08_42_11_063 Shark error10.txt

MikeStirner commented 10 months ago

My SHARK filename is nodai_shark_studio_20240109_1118.exe, so I guess that's the version.

MikeStirner commented 10 months ago

Tried some flags (from the "Target" field of the shortcut I made):

C:\Installs\SHARK\nodai_shark_studio_20240109_1118.exe --vulkan_large_heap_block_size=0 --use_base_vae

No difference. So I then used the dropdown in the app to change the default VAE to None.

That time it got farther than before but gave me a MemoryError:

(earlier output left out because it looked exactly the same as before)

Looking into gs://shark_tank/SDXL/mlir/unet_1_77_512_512_fp16_sdxl-turbo.mlir
torch\fx\node.py:272: UserWarning: Trying to prepend a node to itself. This behavior has no effect on the graph.
  warnings.warn("Trying to prepend a node to itself. This behavior has no effect on the graph.")
saving unet_1_77_512_512_fp16_sdxl-turbo_vulkan_torch_linalg.mlir to .\shark_tmp
No vmfb found. Compiling and saving to C:\Installs\SHARK\unet_1_77_512_512_fp16_sdxl-turbo_vulkan.vmfb
Configuring for device:vulkan://00000000-0300-0000-0000-000000000000
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Exception in thread Thread-50 (_readerthread):
Traceback (most recent call last):
  File "threading.py", line 1038, in _bootstrap_inner
  File "threading.py", line 975, in run
  File "subprocess.py", line 1552, in _readerthread
MemoryError
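Note: the MemoryError in that traceback is raised inside Python's standard subprocess machinery rather than in SHARK or IREE code directly. Below is a minimal, self-contained sketch (not SHARK's actual code; the compile command is a hypothetical stand-in) of where a _readerthread like the one in the traceback comes into play: when a child process such as the compiler is launched with its output captured through pipes, Python buffers the entire stream in RAM before handing it back, so an already-full system can fail at exactly this point.

```python
# Minimal sketch, not SHARK's code: where subprocess's _readerthread shows up.
# When both stdout and stderr are captured through pipes on Windows,
# Popen.communicate() spawns one _readerthread per pipe, and each thread reads
# its whole stream into memory. That in-memory buffering is where a
# MemoryError like the one in the log above surfaces if RAM is exhausted.
import subprocess
import sys

# Hypothetical stand-in for the compile step SHARK launches as a child process.
cmd = [sys.executable, "-c", "print('x' * 10_000_000)"]

proc = subprocess.Popen(
    cmd,
    stdout=subprocess.PIPE,  # captured output accumulates in RAM
    stderr=subprocess.PIPE,  # a second pipe forces reader threads on Windows
)
out, err = proc.communicate()  # _readerthread collects each pipe into bytes
print(f"captured {len(out)} bytes of stdout")
```

That reader thread is just where the allocation finally fails, which suggests the UNet compile step itself is what pushes RAM usage past 32 GB.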