lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Vid2Vid expecting Torch Cuda - any workaround? #203

Closed: GoZippy closed this issue 7 months ago

GoZippy commented 1 year ago

Is there an existing issue for this?

What happened?

Installed the text2video extension for auto1111, tried video2video (vid2vid), and got the following error:

text2video — The model selected is:  ModelScope
 text2video extension for auto1111 webui
Git commit: c1da6b1d
Starting text2video
Pipeline setup
config namespace(framework='pytorch', task='text-to-video-synthesis', model={'type': 'latent-text-to-video-synthesis', 'model_args': {'ckpt_clip': 'open_clip_pytorch_model.bin', 'ckpt_unet': 'text2video_pytorch_model.pth', 'ckpt_autoencoder': 'VQGAN_autoencoder.pth', 'max_frames': 16, 'tiny_gpu': 1}, 'model_cfg': {'unet_in_dim': 4, 'unet_dim': 320, 'unet_y_dim': 768, 'unet_context_dim': 1024, 'unet_out_dim': 4, 'unet_dim_mult': [1, 2, 4, 4], 'unet_num_heads': 8, 'unet_head_dim': 64, 'unet_res_blocks': 2, 'unet_attn_scales': [1, 0.5, 0.25], 'unet_dropout': 0.1, 'temporal_attention': 'True', 'num_timesteps': 1000, 'mean_type': 'eps', 'var_type': 'fixed_small', 'loss_type': 'mse'}}, pipeline={'type': 'latent-text-to-video-synthesis'})
device privateuseone:0
got a request to *vid2vid* an existing video.
Trying to extract frames from video with input FPS of 29.99948471164809. Please wait patiently.
Successfully extracted 22880.0 frames from video.
Loading frames: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 251/251 [00:07<00:00, 34.24it/s]
Converted the frames to tensor (1, 251, 3, 512, 512)
Traceback (most recent call last):
  File "E:\ImageAI\tiger\tiger2023-05-09\stable-diffusion-webui-directml/extensions/sd-webui-text2video/scripts\t2v_helpers\render.py", line 27, in run
    vids_pack = process_modelscope(args_dict)
  File "E:\ImageAI\tiger\tiger2023-05-09\stable-diffusion-webui-directml/extensions/sd-webui-text2video/scripts\modelscope\process_modelscope.py", line 122, in process_modelscope
    vd_out = torch.from_numpy(bcfhw).to("cuda")
  File "E:\ImageAI\tiger\tiger2023-05-09\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Exception occurred: Torch not compiled with CUDA enabled
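The failure above comes from a hard-coded `.to("cuda")` call running on a torch build (2.0.0+cpu, per the version info later in this report) that was not compiled with CUDA. A minimal, hypothetical sketch of the device-agnostic selection the extension could use instead; `pick_device` and `torch_like` are illustrative names, not part of the extension:

```python
# Hypothetical helper: choose a device string instead of hard-coding "cuda".
# `torch_like` stands in for the real torch module so the logic stays testable.
def pick_device(torch_like):
    """Prefer DirectML when present, then CUDA, then fall back to CPU."""
    if getattr(torch_like, "dml", None) is not None:
        # DirectML builds report the device as "privateuseone:0" (see log above)
        return "privateuseone:0"
    cuda = getattr(torch_like, "cuda", None)
    if cuda is not None and cuda.is_available():
        return "cuda"
    return "cpu"
```

With something like this in place, the failing line would read `vd_out = torch.from_numpy(bcfhw).to(pick_device(torch))` instead of targeting "cuda" unconditionally.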

Steps to reproduce the problem

see above

Using the master branch of https://github.com/lshqqytiger/stable-diffusion-webui-directml.git, commit ebf229bd (Sat Jun 3 17:09:55 2023).

What should have happened?

Looking for a CPU-only option or an AMD option. I'm using GPU1, an AMD Radeon RX 6700 XT (I also have a GPU2, an RX 580, to test with).

Would there be any option to force ModelScope to use the CPU, or any work you could do to support AMD instead of NVIDIA-only tensors?

Version or Commit where the problem happens

The version field in the UI is not showing (it is blank) -> version:  •  python: 3.10.6  •  torch: 2.0.0+cpu  •  xformers: N/A  •  gradio: 3.31.0  •  checkpoint: b8d0dc8489

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

AMD GPUs (RX 6000 above)

Cross attention optimization

Automatic

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9,max_split_size_mb:512
set COMMANDLINE_ARGS=--medvram --opt-sdp-attention --disable-safe-unpickle
set SAFETENSORS_FAST_GPU=1

call webui.bat

List of extensions

| Extension | URL | Branch | Version | Date | Update |
| --- | --- | --- | --- | --- | --- |
| SD-CN-Animation | https://github.com/volotat/SD-CN-Animation.git | main | 2e257bbf | Tue May 30 16:49:03 2023 | latest |
| animator_extension | https://github.com/Animator-Anon/animator_extension.git | main | e4b8395c | Fri May 19 08:57:39 2023 | new commits |
| deforum-for-automatic1111-webui | https://github.com/deforum-art/deforum-for-automatic1111-webui.git | automatic1111-webui | b58056f9 | Tue Jun 6 21:47:35 2023 | new commits |
| ebsynth_utility | https://github.com/s9roll7/ebsynth_utility.git | main | 44c18e0e | Wed May 31 07:02:22 2023 | latest |
| gif2gif | https://github.com/LonicaMewinsky/gif2gif.git | main | 5121851e | Sat May 13 14:56:04 2023 | latest |
| infinite-zoom-automatic1111-webui | https://github.com/v8hid/infinite-zoom-automatic1111-webui.git | main | 1139764b | Wed May 3 17:08:52 2023 | new commits |
| openpose-editor | https://github.com/fkunn1326/openpose-editor.git | master | 722bca6f | Sat Jun 3 04:54:52 2023 | latest |
| sd-3dmodel-loader | https://github.com/jtydhr88/sd-3dmodel-loader.git | master | 4fb3a1b5 | Wed May 24 17:05:58 2023 | new commits |
| sd-canvas-editor | https://github.com/jtydhr88/sd-canvas-editor.git | master | 248d112d | Wed May 24 17:08:03 2023 | new commits |
| sd-extension-steps-animation | https://github.com/vladmandic/sd-extension-steps-animation.git | main | 13e5b455 | Sun May 21 14:28:21 2023 | new commits |
| sd-webui-3d-open-pose-editor | https://github.com/nonnonstop/sd-webui-3d-open-pose-editor.git | main | f2d5aac5 | Sat Apr 15 13:21:06 2023 | latest |
| sd-webui-controlnet | https://github.com/Mikubill/sd-webui-controlnet.git | main | 2598ca9e | Thu Jun 8 01:27:33 2023 | new commits |
| sd-webui-panorama-viewer | https://github.com/GeorgLegato/sd-webui-panorama-viewer.git | main | 2a8195f8 | Mon May 29 02:31:41 2023 | latest |
| sd-webui-prompt-all-in-one | https://github.com/Physton/sd-webui-prompt-all-in-one | main | 7b9484c1 | Wed Jun 7 02:29:49 2023 | new commits |
| sd-webui-text2video | https://github.com/deforum-art/sd-webui-text2video.git | main | c1da6b1d | Wed Jun 7 16:24:52 2023 | new commits |
| sd_save_intermediate_images | https://github.com/AlUlkesh/sd_save_intermediate_images.git | main | 8115a847 | Mon Mar 27 13:58:26 2023 | latest |
| stable-diffusion-webui-depthmap-script | https://github.com/thygate/stable-diffusion-webui-depthmap-script.git | main | e394d38c | Tue May 30 00:18:06 2023 | new commits |
| stable-diffusion-webui-model-toolkit | https://github.com/arenasys/stable-diffusion-webui-model-toolkit.git | master | 4d8fea77 | Sun May 14 09:09:42 2023 | latest |
| training-picker | https://github.com/Maurdekye/training-picker.git | master | d2784b09 | Fri Apr 14 23:30:42 2023 | latest |
| unprompted | https://github.com/ThereforeGames/unprompted.git | main | 1fb71688 | Sat May 13 23:38:30 2023 | new commits |
| video_loopback_for_webui | https://github.com/fishslot/video_loopback_for_webui.git | main | 4e0bf8c5 | Fri Apr 21 12:08:15 2023 | latest |
| LDSR | built-in | None | | Tue Jul 25 00:15:22 2023 | |
| Lora | built-in | None | | Tue Jul 25 00:15:22 2023 | |
| ScuNET | built-in | None | | Tue Jul 25 00:15:22 2023 | |
| SwinIR | built-in | None | | Tue Jul 25 00:15:22 2023 | |
| prompt-bracket-checker | built-in | None | | Tue Jul 25 00:15:22 2023 | |

Console logs

See the console logs included above.

Additional information

No response

lshqqytiger commented 1 year ago

Extensions are outside my scope. Strictly speaking, I could hijack the extension's functions, but that goes beyond the abstraction boundary: extensions should not be touched by the WebUI itself. You can ask the extension's creator/contributors to change "cuda" to the correct device, or fork the extension and edit it yourself. I provide some helpers such as torch.dml.default_device() and torch.dml.current_device().

# Best way: reuse the device the WebUI has already selected
from modules.devices import device

...
vd_out = torch.from_numpy(bcfhw).to(device)

# Second way: use the DirectML device when it is initialized, else fall back to CUDA
import torch

if hasattr(torch, "dml"):  # DirectML initialized
    vd_out = torch.from_numpy(bcfhw).to(torch.dml.current_device())
else:
    vd_out = torch.from_numpy(bcfhw).to("cuda")
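The "second way" above can be folded into one small helper so call sites do not repeat the hasattr check. This is only a sketch, under the assumption that torch.dml.current_device() returns a usable device; `resolve_target_device` and `torch_mod` are illustrative names:

```python
# Illustrative helper wrapping the hasattr-based fallback from the snippet above.
# `torch_mod` stands in for the real torch module.
def resolve_target_device(torch_mod, fallback="cuda"):
    """Return the DirectML device when torch.dml is present, else the fallback."""
    dml = getattr(torch_mod, "dml", None)
    if dml is not None:
        return dml.current_device()
    return fallback
```

A call site would then become `vd_out = torch.from_numpy(bcfhw).to(resolve_target_device(torch))`, keeping the CUDA path intact for NVIDIA builds.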