kabachuha / sd-webui-text2video

Auto1111 extension implementing text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies

[Bug]: #168

Closed: CreateLab closed this issue 1 year ago

CreateLab commented 1 year ago

Is there an existing issue for this?

Are you using the latest version of the extension?

What happened?

I have an RTX 3060 mobile with 6 GB of VRAM, but when I try to use text-to-video I get an out-of-memory error.

Steps to reproduce the problem

  1. Run Stable Diffusion: a) .\webui.bat b) .\webui.bat --xformers (tested both variants)
  2. Go to settings: a) enable low VRAM b) leave low VRAM disabled (tested both)
  3. Check the PowerShell output:
    torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 5.20 GiB already allocated; 0 bytes free; 5.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
    Exception occurred: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 5.20 GiB already allocated; 0 bytes free; 5.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
    Interrupted with signal 2 in <frame at 0x00000215C884ACE0, file 'C:\\path\\stable-diffusion-webui\\webui.py', line 206, code wait_on_server>
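The error message itself points at `PYTORCH_CUDA_ALLOC_CONF`. As a possible workaround (a sketch, not a confirmed fix for this extension), the allocator's `max_split_size_mb` can be set before launching the webui; the `64` value below is an illustrative choice, not something prescribed by the report:

```shell
# Windows cmd (add to webui-user.bat before the launch line):
#   set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:64
# POSIX-shell equivalent, as referenced by the PyTorch OOM message:
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:64
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

This only mitigates fragmentation inside PyTorch's caching allocator; it cannot make a model fit that is simply larger than the available VRAM.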

What should have happened?

I expected to get a generated video.

WebUI and Deforum extension Commit IDs

webui commit id - a9fed7c3
txt2vid commit id - a8937ba

Torch version

tensorboard 2.10.1 tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.1 tensorflow 2.10.1 tensorflow-estimator 2.10.0 tensorflow-io-gcs-filesystem 0.31.0 termcolor 2.2.0 typing_extensions 4.5.0

What GPU were you using for launching?

3060 mobile

On which platform are you launching the webui backend with the extension?

Local PC setup (Windows)

Settings

(screenshot of the extension settings attached)

Console logs

text2video extension for auto1111 webui
Git commit: a8937baf (Sun May 21 22:32:27 2023)
Starting text2video
Pipeline setup
config namespace(framework='pytorch', task='text-to-video-synthesis', model={'type': 'latent-text-to-video-synthesis', 'model_args': {'ckpt_clip': 'open_clip_pytorch_model.bin', 'ckpt_unet': 'text2video_pytorch_model.pth', 'ckpt_autoencoder': 'VQGAN_autoencoder.pth', 'max_frames': 16, 'tiny_gpu': 1}, 'model_cfg': {'unet_in_dim': 4, 'unet_dim': 320, 'unet_y_dim': 768, 'unet_context_dim': 1024, 'unet_out_dim': 4, 'unet_dim_mult': [1, 2, 4, 4], 'unet_num_heads': 8, 'unet_head_dim': 64, 'unet_res_blocks': 2, 'unet_attn_scales': [1, 0.5, 0.25], 'unet_dropout': 0.1, 'temporal_attention': 'True', 'num_timesteps': 1000, 'mean_type': 'eps', 'var_type': 'fixed_small', 'loss_type': 'mse'}}, pipeline={'type': 'latent-text-to-video-synthesis'})
Traceback (most recent call last):
  File "C:\Users\f98f9\stabdiff\stable-diffusion-webui/extensions/sd-webui-text2video/scripts\t2v_helpers\render.py", line 24, in run
    vids_pack = process_modelscope(args_dict)
  File "C:\Users\f98f9\stabdiff\stable-diffusion-webui/extensions/sd-webui-text2video/scripts\modelscope\process_modelscope.py", line 55, in process_modelscope
    pipe = setup_pipeline()
  File "C:\Users\f98f9\stabdiff\stable-diffusion-webui/extensions/sd-webui-text2video/scripts\modelscope\process_modelscope.py", line 26, in setup_pipeline
    return TextToVideoSynthesis(ph.models_path + '/ModelScope/t2v')
  File "C:\Users\f98f9\stabdiff\stable-diffusion-webui/extensions/sd-webui-text2video/scripts\modelscope\t2v_pipeline.py", line 86, in __init__
    torch.load(
  File "C:\Users\f98f9\stabdiff\stable-diffusion-webui\modules\safe.py", line 106, in load
    return load_with_extra(filename, extra_handler=global_extra_handler, *args, **kwargs)
  File "C:\Users\f98f9\stabdiff\stable-diffusion-webui\modules\safe.py", line 151, in load_with_extra
    return unsafe_torch_load(filename, *args, **kwargs)
  File "C:\Users\f98f9\stabdiff\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\f98f9\stabdiff\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 1131, in _load
    result = unpickler.load()
  File "C:\Users\f98f9\stabdiff\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 1101, in persistent_load
    load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "C:\Users\f98f9\stabdiff\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 1083, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "C:\Users\f98f9\stabdiff\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 215, in default_restore_location
    result = fn(storage, location)
  File "C:\Users\f98f9\stabdiff\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 187, in _cuda_deserialize
    return obj.cuda(device)
  File "C:\Users\f98f9\stabdiff\stable-diffusion-webui\venv\lib\site-packages\torch\_utils.py", line 80, in _cuda
    untyped_storage = torch.UntypedStorage(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 5.20 GiB already allocated; 0 bytes free; 5.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Exception occurred: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 5.20 GiB already allocated; 0 bytes free; 5.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
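Reading the numbers out of the log above may help triage: PyTorch has reserved 5.33 GiB of the 6 GiB card while only 5.20 GiB is allocated to live tensors, so roughly 133 MiB of slack sits inside the caching allocator, yet a 20 MiB request still failed. That pattern is what the message's fragmentation hint is about, though with the reservation already at ~89% of total VRAM the model is close to not fitting at all. A small arithmetic sketch (all values taken from the log, interpretation hedged):

```python
# Values copied from the OutOfMemoryError message in the console log.
total_gib = 6.00      # GPU 0 total capacity
allocated_gib = 5.20  # already allocated to tensors
reserved_gib = 5.33   # reserved in total by PyTorch's caching allocator
request_mib = 20.00   # size of the allocation that failed

# Slack the allocator holds but has not handed out to tensors:
slack_mib = (reserved_gib - allocated_gib) * 1024

# Memory outside PyTorch's reservation (driver/OS overhead eats into this,
# which is why the log still reports "0 bytes free"):
outside_gib = total_gib - reserved_gib

print(f"slack inside allocator: {slack_mib:.0f} MiB")   # larger than the 20 MiB request
print(f"outside reservation:    {outside_gib:.2f} GiB")
```

Since the slack exceeds the failed request, the 20 MiB could not be carved out of any single cached block, which is the scenario `max_split_size_mb` targets.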

Additional information

No response

github-actions[bot] commented 1 year ago

This issue has been closed due to incorrect formatting. Please address the following mistakes and reopen the issue: