numz / sd-wav2lip-uhq

Wav2Lip UHQ extension for Automatic1111
Apache License 2.0

I get stuck here every time I try it. Any solution? #67

Open zachysaur opened 1 year ago

zachysaur commented 1 year ago

```
(webui1111) D:\webui1111\stable-diffusion-webui-master>webui-user.bat
venv "D:\webui1111\stable-diffusion-webui-master\venv\Scripts\Python.exe"
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Version: 1.6.0
Commit hash:
Installing wav2lip_uhq requirement: dlib-bin
Installing wav2lip_uhq requirement: opencv-python
Installing wav2lip_uhq requirement: pillow
Installing wav2lip_uhq requirement: librosa==0.10.0.post2
Installing wav2lip_uhq requirement: opencv-contrib-python
Installing wav2lip_uhq requirement: git+https://github.com/suno-ai/bark.git
Installing wav2lip_uhq requirement: insightface==0.7.3
Installing wav2lip_uhq requirement: onnx==1.14.0
Installing wav2lip_uhq requirement: onnxruntime==1.15.0
Installing wav2lip_uhq requirement: onnxruntime-gpu==1.15.0
Installing wav2lip_uhq requirement: opencv-python>=4.8.0
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [15012c538f] from D:\webui1111\stable-diffusion-webui-master\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors
Creating model from config: D:\webui1111\stable-diffusion-webui-master\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
```

```
To create a public link, set share=True in launch().
Startup time: 124.6s (prepare environment: 72.0s, launcher: 0.2s, import torch: 16.8s, import gradio: 7.7s, setup paths: 7.3s, initialize shared: 0.5s, other imports: 6.7s, setup codeformer: 1.0s, setup gfpgan: 0.2s, load scripts: 7.7s, load upscalers: 0.1s, initialize extra networks: 0.5s, create ui: 2.0s, gradio launch: 2.8s).
Applying attention optimization: Doggettx... done.
Model loaded in 23.3s (load weights from disk: 2.3s, create model: 0.7s, apply weights to model: 18.2s, load textual inversion embeddings: 1.4s, calculate empty prompt: 0.5s).
Using cuda for inference.
Reading video frames...
Number of frames available for inference: 314
(80, 849)
Length of mel chunks: 314
  0%|          | 0/20 [05:36<?, ?it/s]
Recovering from OOM error; New batch size: 8 | 0/20 [00:00<?, ?it/s]
  0%|          | 0/40 [00:00<?, ?it/s]
```
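The "Recovering from OOM error; New batch size: 8" line suggests the inference loop hit a CUDA out-of-memory error and restarted with a halved batch size, which is why the progress bar resets and the step count doubles (0/20 becomes 0/40). A minimal sketch of that retry pattern, with `run_batch` and `MemoryError` as hypothetical stand-ins for the real model call and torch's CUDA OOM `RuntimeError`:

```python
def infer_with_oom_recovery(frames, batch_size, run_batch):
    """Run inference over frames, halving the batch size whenever a
    batch raises an out-of-memory error, until batch size 1 fails."""
    while batch_size >= 1:
        try:
            results = []
            for i in range(0, len(frames), batch_size):
                # run_batch is a stand-in for the real GPU model call
                results.extend(run_batch(frames[i:i + batch_size]))
            return results
        except MemoryError:  # stand-in for torch's CUDA OOM RuntimeError
            batch_size //= 2
            print(f"Recovering from OOM error; New batch size: {batch_size}")
    raise RuntimeError("Out of memory even at batch size 1")
```

If every batch size down to 1 still overflows GPU memory, no batch size helps; only shrinking the per-frame workload (e.g. a resize factor) will.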

numz commented 1 year ago

Try using the "resize factor" option and let me know.
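For context on why this helps: a resize factor divides the video's resolution before inference, so GPU memory per frame drops roughly by the square of the factor. An illustrative sketch of the effect on one frame, using plain striding instead of the proper interpolation (e.g. `cv2.resize`) the extension would use; the function name is hypothetical:

```python
import numpy as np

def apply_resize_factor(frame: np.ndarray, factor: int) -> np.ndarray:
    """Downscale a frame by an integer resize factor via striding.

    Illustrative only: real pipelines interpolate, but striding shows
    the memory effect without extra dependencies.
    """
    return frame[::factor, ::factor]

# A 720p RGB frame shrunk by resize factor 2 keeps 1/4 of the pixels.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
small = apply_resize_factor(frame, 2)
print(frame.shape, "->", small.shape)
```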