chrisbward opened this issue 1 year ago
CUDA_VISIBLE_DEVICES=0 python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face samples/out.mp4 --audio samples/combined.wav
Using cuda for inference.
Reading video frames...
Number of frames available for inference: 2770
(80, 8866)
Length of mel chunks: 2767
0%| | 0/22 [00:00<?, ?it/s]
Frozen - it seems to be using CUDA, as per the message above.
Did you solve this? I am having the same issue.
The program is using the GPU: I can see GPU memory and CUDA usage in the performance tab. But it's estimating 53 hours to process 900 frames, which doesn't make sense.
Same here. It used to run fine on my i5 9400F + RTX 2060 PC; now that I have a 7950X with an RTX 4090 and fast NVMe drives, it's maxing out my RAM, hammering the NVMe SSD, and using 50% of my CPU. The process takes 30 minutes, whereas it took about 30-60 seconds on my old machine. The output in my Anaconda prompt also looks quite different.
Install the CUDA builds of the following (pip install torch gives you the CPU-only version):

pip install torch==1.10.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
pip install torchvision==0.11.3+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
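One way to tell the two builds apart without re-running inference: CUDA wheels of PyTorch carry a local version suffix like +cu113 in torch.__version__ (and torch.cuda.is_available() returns True), while CPU-only wheels report a plain version or a +cpu suffix. A minimal sketch of that version-string check, assuming you have the string from pip show torch or torch.__version__ (the helper name is just for illustration):

```python
def is_cuda_build(version: str) -> bool:
    """Guess from a PyTorch version string whether it is a CUDA wheel.

    CUDA wheels carry a local version suffix such as "+cu113";
    CPU-only wheels have no suffix or "+cpu".
    """
    # partition("+") splits off the local version suffix, if any
    _, _, local = version.partition("+")
    return local.startswith("cu")

print(is_cuda_build("1.10.2+cu113"))  # → True  (CUDA build)
print(is_cuda_build("1.10.2"))        # → False (CPU-only wheel)
print(is_cuda_build("1.10.2+cpu"))    # → False (explicit CPU wheel)
```

If this prints False for your installed torch, inference will silently run on the CPU even though nvidia-smi shows some memory allocated, which matches the "GPU visible but extremely slow" symptom above.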
OMGGGG IT WORKED THANK YOU BROO
Yes. Does anyone know which GPU + CPU + memory + SSD combination runs this at normal speed?
What's up with that?