Markfryazino / wav2lip-hq

Extension of Wav2Lip repository for processing high-quality videos.

Why does inference explicitly require a GPU? #9

Closed jageshmaharjan closed 2 years ago

jageshmaharjan commented 2 years ago

I am trying to run inference, but for some reason it explicitly asks for a GPU.

my script is: python inference.py --checkpoint_path wav2lip_gan.pth --segmentation_path s3fd-619a316812.pth --sr_path esrgan_yunying.pth --face chroma_video_1.mp4 --audio wave_01.mp3 --outfile result.mp4

and the error is:

File "inference.py", line 296, in main
    seg_net = init_parser(args.segmentation_path)
...................................
...................................
File "/home/ubuntu/anaconda3/envs/myenv/lib/python3.7/site-packages/torch/cuda/__init__.py", line 172, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

In the face_parsing/swap.py file, the following function calls net.cuda(). How do I change it to run on the CPU?

def init_parser(pth_path):
    n_classes = 19
    net = BiSeNet(n_classes=n_classes)
    net.cuda()
    net.load_state_dict(torch.load(pth_path))
    net.eval()
    return net
jageshmaharjan commented 2 years ago

Never mind, I just wasn't familiar with PyTorch. The function should simply be:

def init_parser(pth_path):
    n_classes = 19
    net = BiSeNet(n_classes=n_classes)
    net.cpu()
    net.load_state_dict(torch.load(pth_path, map_location=torch.device('cpu')))
    net.eval()
    return net
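
For reference, a device-agnostic variant (just a sketch, not part of the repository; the BiSeNet import path is assumed and may need adjusting to the repo layout) would pick CUDA when a working driver is present and fall back to CPU otherwise:

import torch
from face_parsing.model import BiSeNet  # assumed import path; adjust to the repository layout

def init_parser(pth_path):
    n_classes = 19
    # Use the GPU if a working CUDA driver is available, otherwise fall back to CPU.
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    net = BiSeNet(n_classes=n_classes).to(device)
    # map_location remaps tensors saved on a GPU onto whichever device was chosen above.
    net.load_state_dict(torch.load(pth_path, map_location=device))
    net.eval()
    return net

With a change along these lines, the same inference script should run unchanged on both CPU-only and GPU machines.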