deeplearn-art opened this issue 1 year ago
It seems the installation of `opencv_python==4.5.1.48` failed on your system. You may try another opencv_python version that is compatible with your machine by modifying it in requirements.txt. Specific versions of opencv_python require certain system dependencies, and the exact version doesn't matter that much.
I get this error as well. The following change seems to fix it, even though there are other errors after this.

I updated `stable-diffusion-webui\extensions\Text2Video-Zero-sd-webui\requirements.txt` from:

```
opencv_python==4.5.1.48
opencv-contrib-python==4.3.0.36
```

to:

```
opencv-python==4.7.0.72
opencv-contrib-python==4.7.0.72
```
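For anyone checking whether the pin change actually took effect, a small sketch like this (run with the webui's venv Python; the package names are the standard PyPI distribution names) reports what is installed:

```python
# Hedged helper: report which opencv distributions are installed in this
# environment, without importing cv2 itself.
from importlib.metadata import version, PackageNotFoundError

def installed_version(dist):
    """Return the installed version string for `dist`, or None if absent."""
    try:
        return version(dist)
    except PackageNotFoundError:
        return None

for dist in ("opencv-python", "opencv-contrib-python"):
    print(dist, installed_version(dist) or "not installed")
```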
I am able to get past the dependency installation even with some failures, and the extension is still able to start. However, on final startup, this error now pops up:
```
Traceback (most recent call last):
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\modules\scripts.py", line 248, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\extensions\Text2Video-Zero-sd-webui\scripts\main.py", line 8, in <module>
    from app_pose import create_demo as create_demo_pose
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\extensions\Text2Video-Zero-sd-webui\app_pose.py", line 65, in <module>
    def on_video_path_update(evt: gr.EventData):
AttributeError: module 'gradio' has no attribute 'EventData'
```
Okay, I got past the previous issue, but now I am running into the following error, and it completely breaks the UI every time. Note that I am running in low-VRAM mode, but I have seen people fail to load this portion even with 24 GB of VRAM. (I have 8 GB, and I couldn't load it with med-vram or low-vram.)
```
Error executing callback ui_tabs_callback for C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\extensions\Text2Video-Zero-sd-webui\scripts\main.py
Traceback (most recent call last):
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\modules\script_callbacks.py", line 126, in ui_tabs_callback
    res += c.callback() or []
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\extensions\Text2Video-Zero-sd-webui\scripts\main.py", line 18, in on_ui_tabs
    create_demo_text_to_video(model)
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\extensions\Text2Video-Zero-sd-webui\app_text_to_video.py", line 61, in create_demo
    gr.Examples(examples=examples,
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\gradio\helpers.py", line 70, in create_examples
    utils.synchronize_async(examples_obj.create)
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 516, in synchronize_async
    return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs)
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\fsspec\asyn.py", line 96, in sync
    raise return_result
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\fsspec\asyn.py", line 53, in _runner
    result[0] = await coro
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\gradio\helpers.py", line 277, in create
    await self.cache()
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\gradio\helpers.py", line 311, in cache
    prediction = await Context.root_block.process_api(
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api
    result = await self.call_function(
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 884, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\extensions\Text2Video-Zero-sd-webui\model.py", line 286, in process_text2video
    result = self.inference(prompt=[prompt],
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\extensions\Text2Video-Zero-sd-webui\model.py", line 100, in inference
    return self.pipe(generator=self.generator, **kwargs).videos[0]
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\extensions\Text2Video-Zero-sd-webui\text_to_video\text_to_video_pipeline.py", line 372, in __call__
    ddim_res = self.DDIM_backward(num_inference_steps=num_inference_steps, timesteps=timesteps, skip_t=t1, t0=-1, t1=-1, do_classifier_free_guidance=do_classifier_free_guidance,
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\extensions\Text2Video-Zero-sd-webui\text_to_video\text_to_video_pipeline.py", line 164, in DDIM_backward
    noise_pred = self.unet(
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 582, in forward
    sample, res_samples = downsample_block(
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 837, in forward
    hidden_states = attn(
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\transformer_2d.py", line 265, in forward
    hidden_states = block(
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\attention.py", line 291, in forward
    attn_output = self.attn1(
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\cross_attention.py", line 205, in forward
    return self.processor(
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\extensions\Text2Video-Zero-sd-webui\utils.py", line 184, in __call__
    attention_probs = attn.get_attention_scores(query, key, attention_mask)
  File "C:\Users\maria\Downloads\Architecture\NovelAI\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\cross_attention.py", line 234, in get_attention_scores
    baddbmm_input = torch.empty(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.00 GiB (GPU 0; 8.00 GiB total capacity; 2.65 GiB already allocated; 2.56 GiB free; 3.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
It could be a problem with my setup, but I haven't encountered anything like it with other sd-webui extensions.
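The OOM message itself points at `max_split_size_mb`. A minimal sketch of setting `PYTORCH_CUDA_ALLOC_CONF` before torch initializes CUDA (the value 128 is an assumption to tune for your card, not a recommendation from the extension authors):

```python
# Hedged sketch: reduce allocator fragmentation on low-VRAM cards.
# Must run before torch allocates any CUDA memory; 128 MB is an assumed value.
import os

os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

On Windows the equivalent is presumably a `set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` line in webui-user.bat before launch. Separately, diffusers pipelines expose `enable_attention_slicing()`, which trades speed for lower attention memory, though whether this extension's custom pipeline honors it is an assumption.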