YuanxunLu / LiveSpeechPortraits

Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation (SIGGRAPH Asia 2021)
MIT License
1.16k stars · 200 forks

cpu inference #14

Closed AK391 closed 2 years ago

AK391 commented 2 years ago

Please add an option for CPU inference in demo.py.

YuanxunLu commented 2 years ago

I've added a `--device` option for CPU inference. Feel free to ask if you have any questions.
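(For reference, this means appending `--device cpu` to the usual `python demo.py ...` invocation; the default device presumably stays CUDA.)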

AK391 commented 2 years ago

@YuanxunLu thanks, I still get this error when using `--device cpu`:

```
Traceback (most recent call last):
  File "demo.py", line 27, in <module>
    from funcs import utils
  File "/content/LiveSpeechPortraits/funcs/utils.py", line 60, in <module>
    n_mel_channels=80, mel_fmin=90, mel_fmax=7600.0).cuda()
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 637, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 573, in _apply
    self._buffers[key] = fn(buf)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 637, in <lambda>
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.7/dist-packages/torch/cuda/__init__.py", line 172, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
```

YuanxunLu commented 2 years ago

Thanks for your feedback. I've modified the functions; please check the latest version.
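The underlying issue is that `funcs/utils.py` builds its mel-extraction module at import time with a hard-coded `.cuda()`. The device-agnostic pattern is presumably something like the sketch below (the dummy module is only a stand-in for the real one, not code from the repo):

```python
import torch
import torch.nn as nn

# Pick the device once; fall back to CPU when no GPU is available.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hypothetical stand-in for the mel-extraction module built in funcs/utils.py.
# The key change is .to(device) instead of an unconditional .cuda().
mel_extractor = nn.Linear(80, 80).to(device)
```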

AK391 commented 2 years ago

@YuanxunLu thanks, I am getting this error now:

```
Traceback (most recent call last):
  File "demo.py", line 130, in <module>
    Featopt = FeatureOptions().parse()
  File "/content/LiveSpeechPortraits/options/base_options_audio2feature.py", line 158, in parse
    torch.cuda.set_device(opt.gpu_ids[0])
  File "/usr/local/lib/python3.7/dist-packages/torch/cuda/__init__.py", line 264, in set_device
    torch._C._cuda_setDevice(device)
  File "/usr/local/lib/python3.7/dist-packages/torch/cuda/__init__.py", line 172, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
```

YuanxunLu commented 2 years ago

I've commented out that code in the latest version. Let me know if you have any questions.
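An alternative to commenting the line out is guarding the `torch.cuda.set_device` call so the same code path runs on both GPU and CPU machines. A minimal sketch of that pattern (not the repo's exact code; `gpu_ids` is assumed to be a list such as `[0]`):

```python
import torch

def set_device_if_available(gpu_ids):
    """Bind a CUDA device only when one is requested and actually available."""
    if torch.cuda.is_available() and gpu_ids and gpu_ids[0] >= 0:
        torch.cuda.set_device(gpu_ids[0])

# e.g. set_device_if_available(opt.gpu_ids) inside FeatureOptions.parse()
```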

wanghaisheng commented 2 years ago

@YuanxunLu
Please change the following code and CPU inference will work. FYI, the CPU build of PyTorch is required.

1. `losses.py` — remove the hard-coded `.cuda()`:

```python
# before
return current_sample.reshape(b, T, -1).cuda()

# after
return current_sample.reshape(b, T, -1)
```

2. `demo.py` — choose `map_location` based on CUDA availability before loading the checkpoint:

```python
if torch.cuda.is_available():
    map_location = lambda storage, loc: storage.cuda()
else:
    map_location = 'cpu'

APC_model.load_state_dict(
    torch.load(config['model_params']['APC']['ckp_path'], map_location=map_location),
    strict=False)
```
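The same `map_location` treatment presumably has to be applied to every `torch.load` call in demo.py, not just the APC checkpoint. A small helper keeps that consistent (a sketch, not code from the repo):

```python
import torch

def load_checkpoint(path):
    """Load a checkpoint onto the GPU when available, otherwise onto the CPU."""
    map_location = (lambda storage, loc: storage.cuda()) if torch.cuda.is_available() else 'cpu'
    return torch.load(path, map_location=map_location)

# e.g. APC_model.load_state_dict(load_checkpoint(config['model_params']['APC']['ckp_path']), strict=False)
```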