Rudrabha / Wav2Lip

This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For an HD commercial model, please try out Sync Labs:
https://synclabs.so

How to use GPU instead of CPU locally for inference.py? #433

Open chrisbward opened 1 year ago

chrisbward commented 1 year ago

What's up with that?

chrisbward commented 1 year ago
CUDA_VISIBLE_DEVICES=0 python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face samples/out.mp4 --audio samples/combined.wav
Using cuda for inference.
Reading video frames...
Number of frames available for inference: 2770
(80, 8866)
Length of mel chunks: 2767
  0%|                                                                                     | 0/22 [00:00<?, ?it/s]

Frozen. It seems to be using CUDA, per the message above.
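A common first check for this symptom is whether the installed torch build is actually a CUDA wheel. The sketch below is a hypothetical standalone diagnostic (not part of Wav2Lip) using only standard importlib and PyTorch APIs; it runs safely even when torch is absent. Note that if inference.py already prints "Using cuda for inference", CUDA is visible to torch, so a passing check here suggests the bottleneck is elsewhere (face detection is often the slow stage).

```python
# Hypothetical diagnostic script; safe to run even when torch is not installed.
import importlib.util

spec = importlib.util.find_spec("torch")
if spec is None:
    print("torch is not installed")
else:
    import torch
    # CUDA wheels report a version like "1.10.2+cu113";
    # a bare "1.10.2" usually means the CPU-only build.
    print("torch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))
```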

ttkrpink commented 1 year ago

Did you solve this? I am having the same issue.

The program is using the GPU. I can see graphics memory being used and CUDA active in the performance tab, but it's taking 53 hours to process 900 frames. That doesn't make sense.

C-dallas commented 1 year ago

Same here. I used to run it on an i5 9400F with a 2060; now that I have a 7950X with a 4090 and the best NVMe drives, it shows full RAM usage, heavy NVMe activity, and 50% CPU, and the process takes 30 minutes, whereas it took about 30 to 60 seconds on my old machine. What's displayed in my Anaconda prompt is also quite different.

eformx commented 1 year ago

Install the CUDA versions of the following (pip install torch installs the CPU-only version):

pip install torch==1.10.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
pip install torchvision==0.11.3+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
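After reinstalling, a rough sanity benchmark can confirm the GPU is really doing the work. This is a hypothetical sketch (not part of Wav2Lip) that times a large matrix multiply on the CPU and, when a GPU is visible, on CUDA; the CUDA timing should be dramatically lower. It skips gracefully if torch is missing.

```python
# Hypothetical benchmark sketch; uses only standard PyTorch APIs.
import importlib.util
import time

if importlib.util.find_spec("torch") is None:
    print("torch not installed; skipping benchmark")
else:
    import torch

    def timed_matmul(device, n=1024):
        a = torch.randn(n, n, device=device)
        b = torch.randn(n, n, device=device)
        if device == "cuda":
            torch.cuda.synchronize()  # GPU kernels are async; sync before timing
        t0 = time.perf_counter()
        _ = a @ b
        if device == "cuda":
            torch.cuda.synchronize()
        return time.perf_counter() - t0

    print("cpu :", timed_matmul("cpu"))
    if torch.cuda.is_available():
        print("cuda:", timed_matmul("cuda"))
```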

C-dallas commented 1 year ago

OMGGGG IT WORKED THANK YOU BROO

EricKong1985 commented 11 months ago

Yes. Does anyone know what GPU + CPU + memory + SSD combination runs this at a normal speed?