You need to change the device type in the config file (inference.device). If you are using a GPU, set the device to cuda; otherwise set it to cpu. By default I set it to mps, which is for MacBooks.
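For illustration, here is a minimal sketch of how those three values map onto what a given PyTorch build supports at runtime; the pick_device helper is hypothetical, and only the inference.device key comes from this thread:

```python
# Hypothetical helper for choosing the value to put in inference.device.
import torch

def pick_device() -> str:
    if torch.cuda.is_available():          # CUDA build with an NVIDIA GPU
        return "cuda"
    if torch.backends.mps.is_available():  # Apple Silicon (MacBook) build
        return "mps"
    return "cpu"                           # safe default everywhere else

print(pick_device())  # paste this value into config.yaml as inference.device
```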
Microsoft Windows [Version 10.0.19045.5011]
(c) Microsoft Corporation. All rights reserved.
F:\talker\liveportrait_talker-main>venv\scripts\activate
(venv) F:\talker\liveportrait_talker-main>python inference.py --config_path config.yaml --source_path "F:\talker\liveportrait_talker-main\Face\1.png" --audio_path "F:\talker\liveportrait_talker-main\Audio\2.wav" --save_path "F:\talker\liveportrait_talker-main\Audio\1.mp4"
Config File is loaded succesfully!
F:\talker\liveportrait_talker-main\venv\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
F:\talker\liveportrait_talker-main\venv\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=None`.
  warnings.warn(msg)
Pipeline Objects are initialized!
LLVM ERROR: Symbol not found: __svml_cosf8_ha
(venv) F:\talker\liveportrait_talker-main>
Hi, I haven't seen this issue before. I checked the library versions too; everything looks okay and works fine on my side. You can try this solution.
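For context (not from the maintainer's reply): "LLVM ERROR: Symbol not found: __svml_cosf8_ha" is commonly reported with numba/llvmlite on Windows, and a frequently suggested workaround is disabling Intel SVML before numba gets imported. A minimal sketch, assuming the pipeline pulls in numba indirectly (e.g. through librosa):

```python
# Commonly reported workaround for the SVML symbol error on Windows (sketch).
# NUMBA_DISABLE_INTEL_SVML is a documented numba environment variable; it must
# be set before numba is imported, directly or via a dependency like librosa.
import os
os.environ["NUMBA_DISABLE_INTEL_SVML"] = "1"

import librosa  # noqa: E402  (imported after the env var on purpose)
```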
Fixed it somehow. I will make a fork and see if these changes are needed.
https://youtu.be/6aS-Ui37o-o lol, you made model changes after I made my video.
There are also two pyyaml versions in requirements.txt, which causes an error.
Wow, thanks for your support. I hadn't seen this video, sorry 😄. I also noticed in your video that head pose was not working, so I updated the pipeline. I think I have solved the issue.
The requirements.txt problem is solved too.
Thanks for your feedback.
I am trying to make a Gradio UI for it. I am not good at it, but somehow I manage to make things work.
Gradio is on my roadmap too, but I don't know when I can complete it because of my work.
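In case it helps, a minimal Gradio wrapper can simply shell out to the CLI shown above. This is a rough sketch under that assumption; run_talker and the temp-file handling are hypothetical, and only the inference.py flags come from this thread:

```python
# Hypothetical Gradio front end for the inference.py CLI used in this thread.
import os
import subprocess
import tempfile

import gradio as gr

def run_talker(image_path: str, audio_path: str) -> str:
    # The thread shows --save_path accepting an .mp4 file path, so we hand
    # inference.py a fresh output location and return it for Gradio to play.
    out_path = os.path.join(tempfile.mkdtemp(), "result.mp4")
    subprocess.run(
        ["python", "inference.py",
         "--config_path", "config.yaml",
         "--source_path", image_path,
         "--audio_path", audio_path,
         "--save_path", out_path],
        check=True,
    )
    return out_path

demo = gr.Interface(
    fn=run_talker,
    inputs=[gr.Image(type="filepath"), gr.Audio(type="filepath")],
    outputs=gr.Video(),
)

if __name__ == "__main__":
    demo.launch()
```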
(venv) F:\talker\liveportrait_talker-main>python inference.py --config_path config.yaml --source_path "F:\talker\liveportrait_talker-main\Face\1.png" --audio_path "F:\talker\liveportrait_talker-main\Audio\2.wav" --save_path "F:\talker\liveportrait_talker-main\Audio"
Config File is loaded succesfully!
Traceback (most recent call last):
  File "F:\talker\liveportrait_talker-main\inference.py", line 78, in <module>
    main(args)
  File "F:\talker\liveportrait_talker-main\inference.py", line 23, in main
    preprocess = Preprocess(device=cfg.device,
  File "F:\talker\liveportrait_talker-main\src\modules\preprocess.py", line 28, in __init__
    self.sd_prep = SadTalkerPreprocess(device=device,
  File "F:\talker\liveportrait_talker-main\src\utils\preprocess\sadtalker_preprocess.py", line 20, in __init__
    self.detector = init_alignment_model('awing_fan', device=device, model_rootpath=model_path)
  File "F:\talker\liveportrait_talker-main\src\utils\preprocess\helper.py", line 149, in init_alignment_model
    model = model.to(device)
  File "F:\talker\liveportrait_talker-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1173, in to
    return self._apply(convert)
  File "F:\talker\liveportrait_talker-main\venv\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "F:\talker\liveportrait_talker-main\venv\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "F:\talker\liveportrait_talker-main\venv\lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
    param_applied = fn(param)
  File "F:\talker\liveportrait_talker-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1159, in convert
    return t.to(
RuntimeError: PyTorch is not linked with support for mps devices
(venv) F:\talker\liveportrait_talker-main>
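This last traceback is the symptom the first reply addresses: the Windows PyTorch wheel is not built with MPS, so inference.device must be cuda or cpu there. A quick sanity check before editing the config (a sketch, using standard torch APIs):

```python
# Check what this PyTorch build actually supports before setting
# inference.device. Windows wheels never include MPS support.
import torch

print(torch.cuda.is_available())      # True  -> inference.device: cuda
print(torch.backends.mps.is_built())  # False on Windows -> mps would raise
```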