Open sugatasanshiro opened 2 hours ago
Did you download the pretrained models?
Yes. Now it gives this error:
(venv) D:\liveportrait_talker>python inference.py --config_path config.yaml --source_path "D:\liveportrait_talker\Face\Tarkan1080.jpg" --audio_path "D:\liveportrait_talker\Audio\MyMemory.wav" --save_path "D:\liveportrait_talker\Output"
Config File is loaded succesfully!
D:\liveportrait_talker\venv\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
D:\liveportrait_talker\venv\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=None.
  warnings.warn(msg)
Traceback (most recent call last):
  File "D:\liveportrait_talker\inference.py", line 78, in <module>
(venv) D:\liveportrait_talker>
I installed PyTorch with the command
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
but requirements.txt is uninstalling it.
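A quick way to see whether requirements.txt has replaced the CUDA wheel (a diagnostic sketch, not part of the repo) is to check the installed build from Python:

```python
import torch

# A CUDA wheel from the cu118 index reports a "+cu118" suffix in its version
# string; a CPU-only wheel reports "+cpu" (or no suffix) and
# torch.cuda.is_available() returns False.
print(torch.__version__)
print(torch.cuda.is_available())
```

If this shows a CPU-only build after running `pip install -r requirements.txt`, re-running the cu118 install command afterwards should restore the CUDA wheels.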
When the device is set to cuda, it gives this error:
(venv) D:\liveportrait_talker>python inference.py --config_path config.yaml --source_path "D:\liveportrait_talker\Face\Tarkan1080.jpg" --audio_path "D:\liveportrait_talker\Audio\MyMemory.wav" --save_path "D:\liveportrait_talker\Output"
Config File is loaded succesfully!
Traceback (most recent call last):
  File "D:\liveportrait_talker\inference.py", line 78, in <module>
    main(args)
  File "D:\liveportrait_talker\inference.py", line 23, in main
    preprocess = Preprocess(device=cfg.device,
  File "D:\liveportrait_talker\src\modules\preprocess.py", line 28, in __init__
    self.sd_prep = SadTalkerPreprocess(device=device,
  File "D:\liveportrait_talker\src\utils\preprocess\sadtalker_preprocess.py", line 20, in __init__
    self.detector = init_alignment_model('awing_fan', device=device, model_rootpath=model_path)
  File "D:\liveportrait_talker\src\utils\preprocess\helper.py", line 147, in init_alignment_model
    model.load_state_dict(torch.load(model_path, map_location=device)['state_dict'], strict=True)
  File "D:\liveportrait_talker\venv\lib\site-packages\torch\serialization.py", line 1040, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "D:\liveportrait_talker\venv\lib\site-packages\torch\serialization.py", line 1272, in _legacy_load
    result = unpickler.load()
  File "D:\liveportrait_talker\venv\lib\site-packages\torch\serialization.py", line 1205, in persistent_load
    obj = restore_location(obj, location)
  File "D:\liveportrait_talker\venv\lib\site-packages\torch\serialization.py", line 1313, in restore_location
    return default_restore_location(storage, map_location)
  File "D:\liveportrait_talker\venv\lib\site-packages\torch\serialization.py", line 393, in default_restore_location
    raise RuntimeError("don't know how to restore data location of "
RuntimeError: don't know how to restore data location of torch.storage.UntypedStorage (tagged with gpu)
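This RuntimeError means the checkpoint's storages carry a device tag (here "gpu") that torch.load does not know how to map onto the requested device. A common workaround, sketched below on the assumption that the checkpoint is an ordinary torch.save file, is to load with map_location="cpu" and move the model to the GPU afterwards:

```python
import io
import torch

# Minimal round-trip demo (not the repo's actual checkpoint): save a small
# state dict, then load it while remapping every storage to CPU.
buf = io.BytesIO()
torch.save({"state_dict": {"w": torch.zeros(3)}}, buf)
buf.seek(0)

# map_location="cpu" forces all storages onto CPU regardless of how they
# were tagged when the checkpoint was written.
ckpt = torch.load(buf, map_location="cpu")
print(ckpt["state_dict"]["w"].device)  # cpu
```

After model.load_state_dict(...), calling model.to("cuda") would move the weights back onto the GPU if one is available.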
When the device is set to cpu, I get this error:
(venv) D:\liveportrait_talker>python inference.py --config_path config.yaml --source_path "D:\liveportrait_talker\Face\Tarkan1080.jpg" --audio_path "D:\liveportrait_talker\Audio\MyMemory.wav" --save_path "D:\liveportrait_talker\Output"
Config File is loaded succesfully!
D:\liveportrait_talker\venv\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
D:\liveportrait_talker\venv\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=None.
  warnings.warn(msg)
Downloading: "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth" to D:\liveportrait_talker\pretrained_models\sadtalker\detection_Resnet50_Final.pth
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 104M/104M [01:19<00:00, 1.37MB/s]
Traceback (most recent call last):
  File "D:\liveportrait_talker\inference.py", line 78, in <module>
    main(args)
  File "D:\liveportrait_talker\inference.py", line 23, in main
    preprocess = Preprocess(device=cfg.device,
  File "D:\liveportrait_talker\src\modules\preprocess.py", line 31, in __init__
    self.load_3dmm_coeff_model(sadtalker_checkpoint_path=sadtalker_checkpoint_path)
  File "D:\liveportrait_talker\src\modules\preprocess.py", line 34, in load_3dmm_coeff_model
    checkpoint = safetensors.torch.load_file(sadtalker_checkpoint_path)
  File "D:\liveportrait_talker\venv\lib\site-packages\safetensors\torch.py", line 259, in load_file
    with safe_open(filename, framework="pt", device=device) as f: