numz / sd-wav2lip-uhq

Wav2Lip UHQ extension for Automatic1111
Apache License 2.0

stuck at length of mel chunks 0% every time. #69

Open Comput3rUs3r opened 1 year ago

Comput3rUs3r commented 1 year ago

```
(a1111) A:\a1111\a1111>webui-user.bat
venv "A:\a1111\a1111\venv\Scripts\Python.exe"
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Version: 1.6.0
Commit hash:
Installing wav2lip_uhq requirement: dlib-bin
Installing wav2lip_uhq requirement: opencv-python
Installing wav2lip_uhq requirement: pillow
Installing wav2lip_uhq requirement: librosa==0.10.0.post2
Installing wav2lip_uhq requirement: opencv-contrib-python
Installing wav2lip_uhq requirement: git+https://github.com/suno-ai/bark.git
Installing wav2lip_uhq requirement: insightface==0.7.3
Installing wav2lip_uhq requirement: onnx==1.14.0
Installing wav2lip_uhq requirement: onnxruntime==1.15.0
Installing wav2lip_uhq requirement: onnxruntime-gpu==1.15.0
Installing wav2lip_uhq requirement: opencv-python>=4.8.0
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [6ce0161689] from A:\a1111\a1111\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Running on local URL: http://127.0.0.1:7860
Creating model from config: A:\a1111\a1111\configs\v1-inference.yaml
```

```
To create a public link, set share=True in launch().
Startup time: 29.0s (prepare environment: 21.4s, import torch: 2.2s, import gradio: 0.7s, setup paths: 0.5s, initialize shared: 0.2s, other imports: 0.4s, setup codeformer: 0.1s, load scripts: 0.9s, create ui: 0.4s, gradio launch: 2.3s).
Applying attention optimization: Doggettx... done.
Model loaded in 3.3s (load weights from disk: 0.5s, create model: 0.2s, apply weights to model: 1.3s, apply half(): 0.7s, calculate empty prompt: 0.6s).
Using cuda for inference.
Reading video frames...
Number of frames available for inference: 393
(80, 10241)
Length of mel chunks: 3836
  0%|          | 0/30 [00:00<?, ?it/s]
  0%|
```
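For context, the `Length of mel chunks: 3836` line follows from the mel spectrogram shape printed just above it (`(80, 10241)`): Wav2Lip's inference loop slices the spectrogram into 16-frame windows, stepping `80 / fps` mel frames per video frame. A minimal sketch of that arithmetic (the fps is an assumption, since the log doesn't state it; 30 fps reproduces the logged count):

```python
def count_mel_chunks(mel_len, fps, mel_step_size=16):
    """Count the 16-frame mel windows, mirroring Wav2Lip's inference loop."""
    mel_idx_multiplier = 80.0 / fps  # mel frames per video frame
    chunks = 0
    i = 0
    while True:
        start_idx = int(i * mel_idx_multiplier)
        if start_idx + mel_step_size > mel_len:
            chunks += 1  # the loop appends one final, right-aligned chunk
            break
        chunks += 1
        i += 1
    return chunks

print(count_mel_chunks(10241, 30))  # → 3836, matching "Length of mel chunks: 3836"
```

So the progress bar being stuck at 0% means the first batch of those 3836 chunks never finishes, not that the chunking itself failed.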

Mikerhinos commented 1 year ago

Same for me; after around 4 hours it crashes. But when I relaunched the generation, it worked... If I restart A1111 it gets stuck again for hours :'(

Edit: it seems that even 720p is too much for my RTX 3070; it works fine when I use a 1080p video and downscale it by 2x.

It would be cool if we could input a float resize factor like 1.5, to get 720p from a 1080p input.
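A float resize factor could be supported with a small helper that divides the frame dimensions by the factor and rounds down to even values, since most video codecs require even width and height. A sketch (the function name is hypothetical, not the extension's actual API):

```python
def scaled_dims(width, height, resize_factor):
    """Scale frame dimensions by a float factor, rounded down to even values."""
    if resize_factor <= 0:
        raise ValueError("resize_factor must be positive")
    w = int(width / resize_factor) // 2 * 2
    h = int(height / resize_factor) // 2 * 2
    return w, h

print(scaled_dims(1920, 1080, 1.5))  # → (1280, 720): 720p from a 1080p input
print(scaled_dims(1920, 1080, 2))    # → (960, 540): the 2x downscale above
```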

numz commented 1 year ago

Hi @Mikerhinos, OK, I'll look into your suggestion.

I don't fully understand this issue; it seems the model has to be loaded and the process kicked off once, and then regenerating works.

As a workaround, try a resize factor of 4 for the first generation (it will speed up the process), then for the second generation try a resize factor of 1.

Let me know

Comput3rUs3r commented 1 year ago

OK, I tried a different video with a lower resolution and it seems to be processing now. I'll update after it finishes.