numz / sd-wav2lip-uhq

Wav2Lip UHQ extension for Automatic1111
Apache License 2.0

Number of frames available for inference: 936 then no run #99

Open · aistarman opened this issue 6 months ago

aistarman commented 6 months ago

(base) root@zhixingren:~# cd automatic1111-stable-diffusion-webui
(base) root@zhixingren:~/automatic1111-stable-diffusion-webui# python launch.py
Python 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0]
Commit hash: 0cc0ee1bcb4c24a8c9715f66cede06601bfc00c8
Installing requirements for Web UI
Installing wav2lip_uhq requirement: dlib-bin
Installing wav2lip_uhq requirement: opencv-python
Installing wav2lip_uhq requirement: pillow
Installing wav2lip_uhq requirement: librosa==0.10.0.post2
Installing wav2lip_uhq requirement: opencv-contrib-python
Installing wav2lip_uhq requirement: git+https://github.com/suno-ai/bark.git
Installing wav2lip_uhq requirement: insightface==0.7.3
Installing wav2lip_uhq requirement: onnx==1.14.0
Installing wav2lip_uhq requirement: onnxruntime==1.15.0
Installing wav2lip_uhq requirement: onnxruntime-gpu==1.15.0
Installing wav2lip_uhq requirement: opencv-python>=4.8.0

Launching Web UI with arguments:
/root/miniconda3/lib/python3.11/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be removed in 0.17. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
  warnings.warn(
No module 'xformers'. Proceeding without it.
Loading weights [6ce0161689] from /root/automatic1111-stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /root/automatic1111-stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 5.2s (load weights from disk: 0.4s, create model: 0.4s, apply weights to model: 3.2s, apply half(): 0.2s, move model to device: 0.5s, load textual inversion embeddings: 0.4s).
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
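(A side note on the UserWarning in that log: it fires because some dependency still imports from torchvision.transforms.functional_tensor, which was deprecated in torchvision 0.15 and removed in 0.17. The warning itself names the fix; a minimal sketch of the substitution, using rgb_to_grayscale as an assumed example of an affected import:)

```python
# Deprecated private module (removed in torchvision 0.17):
# from torchvision.transforms.functional_tensor import rgb_to_grayscale

# Public replacement named by the warning:
from torchvision.transforms.functional import rgb_to_grayscale
```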

Then I run it:

Using cuda for inference.
Reading video frames...
Number of frames available for inference: 936
(80, 1321)
Length of mel chunks: 491
0%| | 0/4 [00:00<?, ?it/s]
0%| | 0/31 [00:00<?, ?it/s]
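For context on those numbers: (80, 1321) is the mel spectrogram shape (80 mel bands × 1321 audio frames), and Wav2Lip cuts it into one 16-frame window per video frame, stepping 80/fps mel frames each time. A sketch of that chunking loop (paraphrased from upstream Wav2Lip's inference script; mel and fps here are assumed inputs), which reproduces the 491 figure at 30 fps:

```python
import numpy as np

def mel_chunks_for(mel: np.ndarray, fps: float, mel_step_size: int = 16):
    """Slice an (80, T) mel spectrogram into one 16-frame window per video frame."""
    chunks = []
    mel_idx_multiplier = 80.0 / fps  # mel frames advanced per video frame
    i = 0
    while True:
        start = int(i * mel_idx_multiplier)
        if start + mel_step_size > mel.shape[1]:
            # Final window is anchored to the end of the spectrogram.
            chunks.append(mel[:, mel.shape[1] - mel_step_size:])
            break
        chunks.append(mel[:, start:start + mel_step_size])
        i += 1
    return chunks

# A (80, 1321) mel at 30 fps yields 491 chunks, matching the log above.
print(len(mel_chunks_for(np.zeros((80, 1321)), fps=30)))
```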

It stops there and never runs any further. Help, please.

aistarman commented 6 months ago
[screenshot attached: 微信图片_20231214110940]
aistarman commented 6 months ago

My system is Windows 11 + WSL + Ubuntu, and it runs very slowly.
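For anyone debugging the same hang: the install log above pulls in both onnxruntime and onnxruntime-gpu, and when both are installed the CPU-only build can shadow the GPU one, so parts of the pipeline may silently run on CPU (which would feel extremely slow under WSL). A generic sanity check, not specific to this extension, run in the WebUI's Python environment:

```python
import torch
import onnxruntime

# PyTorch side: the inference log already says "Using cuda for inference",
# but it is worth confirming from the same environment.
print("torch CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

# ONNX Runtime side: 'CUDAExecutionProvider' should appear in this list;
# if only 'CPUExecutionProvider' shows up, the ONNX models run on CPU.
print("ORT providers:", onnxruntime.get_available_providers())
```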