Open yancccc opened 6 days ago
Extract the state_dict structure inside epoch 160 yourself
Extracting the state_dict structure inside epoch 160: the keys are 'net_g' and 'net_d'.

(dhlive) E:\shuziren\DH_live-main>python
Python 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:38:46) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> checkpoint = torch.load("checkpoint/epoch_160.pth")
<stdin>:1: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
>>> print(checkpoint['state_dict'].keys())
dict_keys(['net_g', 'net_d'])
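As a side note on the FutureWarning above: a checkpoint that holds only tensors and plain containers loads fine with `weights_only=True`, which silences the warning and avoids the arbitrary-code-execution risk it describes. A minimal sketch (the file here is a stand-in built in place; the real path would be checkpoint/epoch_160.pth):

```python
import torch

# Stand-in file shaped like the checkpoint inspected in the session above.
torch.save({"state_dict": {"net_g": {"w": torch.zeros(2)}, "net_d": {}}},
           "epoch_160_demo.pth")

# weights_only=True restricts unpickling to tensors and plain containers,
# which is exactly what this wrapper dict contains.
ckpt = torch.load("epoch_160_demo.pth", map_location="cpu", weights_only=True)
print(list(ckpt["state_dict"].keys()))  # ['net_g', 'net_d']
```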
Extract net_g
Extract net_g
Got it working, it's just a bit blurry.
Did you also use a single-person video?
I used a few dozen videos.
Extract net_g
Got it working, it's just a bit blurry.
Could you share exactly how you extracted it?
Extract net_g
How do you extract it and convert it into render.pth?
After converting, loading and running it throws an error:
python demo1.py /home/loong/Downloads/loong/loong1_bk video_data/audio0.wav 3.mp4
(256, 256, 3)
Video path is set to: /home/loong/Downloads/loong/loong1_bk
Audio path is set to: video_data/audio0.wav
output video name is set to: 3.mp4
/home/loong/miniconda3/envs/dh_live/lib/python3.12/site-packages/sklearn/base.py:376: InconsistentVersionWarning: Trying to unpickle estimator PCA from version 1.3.0 when using version 1.5.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
warnings.warn(
/ai/DH_live/talkingface/audio_model.py:46: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
self.__net.load_state_dict(torch.load(ckpt_path))
/ai/DH_live/talkingface/render_model.py:37: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(ckpt_path)
Please check the current video, bad frame count: 18
Traceback (most recent call last):
File "/ai/DH_live/demo1.py", line 63, in <module>
main()
File "/ai/DH_live/demo1.py", line 33, in main
renderModel.reset_charactor(video_path, pkl_path)
File "/ai/DH_live/talkingface/render_model.py", line 46, in reset_charactor
prepare_video_data(video_path, Path_pkl, ref_img_index_list)
File "/ai/DH_live/talkingface/run_utils.py", line 226, in prepare_video_data
ref_img = get_ref_images_fromVideo(cap_input, ref_img_index_list, pts_driven[:, :, :2])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/ai/DH_live/talkingface/data/few_shot_dataset.py", line 78, in get_ref_images_fromVideo
ref_img = generate_ref(frame, ref_keypoints[index])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/ai/DH_live/talkingface/data/few_shot_dataset.py", line 50, in generate_ref
crop_coords = crop_face(keypoints, size=img.shape[:2], is_train=is_train)
^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'shape'
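This AttributeError means cv2's cap.read() handed back a None frame: the video has fewer decodable frames than the landmark data expects, matching the bad-frame count of 18 logged above. A guard sketch that fails loudly instead of passing None downstream; the capture object is faked here so the failure is reproducible, and `read_frames` is a hypothetical helper, not part of DH_live (the real code would pass a cv2.VideoCapture):

```python
class FakeCap:
    """Stands in for cv2.VideoCapture: decoding fails after `good_frames` reads."""
    def __init__(self, good_frames=2):
        self.good, self.i = good_frames, 0

    def read(self):
        self.i += 1
        if self.i <= self.good:
            return True, [[0, 0, 0]]   # a dummy "frame"
        return False, None             # what cv2 returns past the last decodable frame

def read_frames(cap, expected):
    """Read `expected` frames, raising instead of passing a None frame downstream."""
    frames = []
    for i in range(expected):
        ok, frame = cap.read()
        if not ok or frame is None:
            raise ValueError(
                f"frame {i} is unreadable; re-encode the video or regenerate the pkl data")
        frames.append(frame)
    return frames

try:
    read_frames(FakeCap(), 5)
except ValueError as err:
    print(err)  # frame 2 is unreadable; re-encode the video or regenerate the pkl data
```

Re-encoding the source video (e.g. with ffmpeg) so every frame decodes cleanly is usually the practical fix.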
Trained on four single-person videos; after 160 epochs the results are still very poor.
Extract net_g
Extract net_g: how exactly is that done, and with what Python commands?
After training finished, I replaced render.pth with the trained epoch_160.pth and got an error when running:
(dhlive) E:\shuziren\DH_live-main>python demo.py video_data/test video_data/audio0.wav out.mp4
(256, 256, 3)
Video path is set to: video_data/test
Audio path is set to: video_data/audio0.wav
output video name is set to: out.mp4
E:\shuziren\DH_live-main\talkingface\audio_model.py:46: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. (warning text identical to the one above)
self.net.load_state_dict(torch.load(ckpt_path))
E:\shuziren\DH_live-main\talkingface\render_model.py:37: FutureWarning: You are using torch.load with weights_only=False (the current default value). (warning text identical to the one above)
checkpoint = torch.load(ckpt_path)
Traceback (most recent call last):
File "E:\shuziren\DH_live-main\demo.py", line 63, in <module>
main()
File "E:\shuziren\DH_live-main\demo.py", line 30, in main
renderModel.loadModel("checkpoint/epoch_160.pth")
File "E:\shuziren\DH_live-main\talkingface\render_model.py", line 38, in loadModel
self.net.load_state_dict(checkpoint)
File "E:\Anaconda3\envs\dhlive\lib\site-packages\torch\nn\modules\module.py", line 2215, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for DINet_five_Ref:
Missing key(s) in state_dict: "source_in_conv.0.conv.weight", "source_in_conv.0.conv.bias", "source_in_conv.0.norm.weight", "source_in_conv.0.norm.bias", "source_in_conv.0.norm.running_mean", "source_in_conv.0.norm.running_var", "source_in_conv.1.conv.weight", "source_in_conv.1.conv.bias", "source_in_conv.1.norm.weight", "source_in_conv.1.norm.bias", "source_in_conv.1.norm.running_mean", "source_in_conv.1.norm.running_var", "source_in_conv.2.conv.weight", "source_in_conv.2.conv.bias", "source_in_conv.2.norm.weight", "source_in_conv.2.norm.bias", "source_in_conv.2.norm.running_mean", "source_in_conv.2.norm.running_var", "ref_in_conv.0.conv.weight", "ref_in_conv.0.conv.bias", "ref_in_conv.0.norm.weight", "ref_in_conv.0.norm.bias", "ref_in_conv.0.norm.running_mean", "ref_in_conv.0.norm.running_var", "ref_in_conv.1.conv.weight", "ref_in_conv.1.conv.bias", "ref_in_conv.1.norm.weight", "ref_in_conv.1.norm.bias", "ref_in_conv.1.norm.running_mean", "ref_in_conv.1.norm.running_var", "ref_in_conv.2.conv.weight", "ref_in_conv.2.conv.bias", "ref_in_conv.2.norm.weight", "ref_in_conv.2.norm.bias", "ref_in_conv.2.norm.running_mean", "ref_in_conv.2.norm.running_var", "trans_conv.0.conv.weight", "trans_conv.0.conv.bias", "trans_conv.0.norm.weight", "trans_conv.0.norm.bias", "trans_conv.0.norm.running_mean", "trans_conv.0.norm.running_var", "trans_conv.1.conv.weight", "trans_conv.1.conv.bias", "trans_conv.1.norm.weight", "trans_conv.1.norm.bias", "trans_conv.1.norm.running_mean", "trans_conv.1.norm.running_var", "trans_conv.2.conv.weight", "trans_conv.2.conv.bias", "trans_conv.2.norm.weight", "trans_conv.2.norm.bias", "trans_conv.2.norm.running_mean", "trans_conv.2.norm.running_var", "trans_conv.3.conv.weight", "trans_conv.3.conv.bias", "trans_conv.3.norm.weight", "trans_conv.3.norm.bias", "trans_conv.3.norm.running_mean", "trans_conv.3.norm.running_var", "trans_conv.4.conv.weight", "trans_conv.4.conv.bias", "trans_conv.4.norm.weight", "trans_conv.4.norm.bias", 
"trans_conv.4.norm.running_mean", "trans_conv.4.norm.running_var", "trans_conv.5.conv.weight", "trans_conv.5.conv.bias", "trans_conv.5.norm.weight", "trans_conv.5.norm.bias", "trans_conv.5.norm.running_mean", "trans_conv.5.norm.running_var", "trans_conv.6.conv.weight", "trans_conv.6.conv.bias", "trans_conv.6.norm.weight", "trans_conv.6.norm.bias", "trans_conv.6.norm.running_mean", "trans_conv.6.norm.running_var", "trans_conv.7.conv.weight", "trans_conv.7.conv.bias", "trans_conv.7.norm.weight", "trans_conv.7.norm.bias", "trans_conv.7.norm.running_mean", "trans_conv.7.norm.running_var", "trans_conv.8.conv.weight", "trans_conv.8.conv.bias", "trans_conv.8.norm.weight", "trans_conv.8.norm.bias", "trans_conv.8.norm.running_mean", "trans_conv.8.norm.running_var", "appearance_conv_list.0.0.conv1.weight", "appearance_conv_list.0.0.conv1.bias", "appearance_conv_list.0.0.conv2.weight", "appearance_conv_list.0.0.conv2.bias", "appearance_conv_list.0.0.norm1.weight", "appearance_conv_list.0.0.norm1.bias", "appearance_conv_list.0.0.norm1.running_mean", "appearance_conv_list.0.0.norm1.running_var", "appearance_conv_list.0.0.norm2.weight", "appearance_conv_list.0.0.norm2.bias", "appearance_conv_list.0.0.norm2.running_mean", "appearance_conv_list.0.0.norm2.running_var", "appearance_conv_list.0.1.conv1.weight", "appearance_conv_list.0.1.conv1.bias", "appearance_conv_list.0.1.conv2.weight", "appearance_conv_list.0.1.conv2.bias", "appearance_conv_list.0.1.norm1.weight", "appearance_conv_list.0.1.norm1.bias", "appearance_conv_list.0.1.norm1.running_mean", "appearance_conv_list.0.1.norm1.running_var", "appearance_conv_list.0.1.norm2.weight", "appearance_conv_list.0.1.norm2.bias", "appearance_conv_list.0.1.norm2.running_mean", "appearance_conv_list.0.1.norm2.running_var", "appearance_conv_list.1.0.conv1.weight", "appearance_conv_list.1.0.conv1.bias", "appearance_conv_list.1.0.conv2.weight", "appearance_conv_list.1.0.conv2.bias", "appearance_conv_list.1.0.norm1.weight", 
"appearance_conv_list.1.0.norm1.bias", "appearance_conv_list.1.0.norm1.running_mean", "appearance_conv_list.1.0.norm1.running_var", "appearance_conv_list.1.0.norm2.weight", "appearance_conv_list.1.0.norm2.bias", "appearance_conv_list.1.0.norm2.running_mean", "appearance_conv_list.1.0.norm2.running_var", "appearance_conv_list.1.1.conv1.weight", "appearance_conv_list.1.1.conv1.bias", "appearance_conv_list.1.1.conv2.weight", "appearance_conv_list.1.1.conv2.bias", "appearance_conv_list.1.1.norm1.weight", "appearance_conv_list.1.1.norm1.bias", "appearance_conv_list.1.1.norm1.running_mean", "appearance_conv_list.1.1.norm1.running_var", "appearance_conv_list.1.1.norm2.weight", "appearance_conv_list.1.1.norm2.bias", "appearance_conv_list.1.1.norm2.running_mean", "appearance_conv_list.1.1.norm2.running_var", "adaAT.commn_linear.0.weight", "adaAT.commn_linear.0.bias", "adaAT.scale.0.weight", "adaAT.scale.0.bias", "adaAT.rotation.0.weight", "adaAT.rotation.0.bias", "adaAT.translation.0.weight", "adaAT.translation.0.bias", "out_conv.0.conv.weight", "out_conv.0.conv.bias", "out_conv.0.norm.weight", "out_conv.0.norm.bias", "out_conv.0.norm.running_mean", "out_conv.0.norm.running_var", "out_conv.1.conv.weight", "out_conv.1.conv.bias", "out_conv.1.norm.weight", "out_conv.1.norm.bias", "out_conv.1.norm.running_mean", "out_conv.1.norm.running_var", "out_conv.2.conv1.weight", "out_conv.2.conv1.bias", "out_conv.2.conv2.weight", "out_conv.2.conv2.bias", "out_conv.2.norm1.weight", "out_conv.2.norm1.bias", "out_conv.2.norm1.running_mean", "out_conv.2.norm1.running_var", "out_conv.2.norm2.weight", "out_conv.2.norm2.bias", "out_conv.2.norm2.running_mean", "out_conv.2.norm2.running_var", "out_conv.3.conv.weight", "out_conv.3.conv.bias", "out_conv.3.norm.weight", "out_conv.3.norm.bias", "out_conv.3.norm.running_mean", "out_conv.3.norm.running_var", "out_conv.4.weight", "out_conv.4.bias".
Unexpected key(s) in state_dict: "epoch", "state_dict", "optimizer".
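The `Unexpected key(s)` line shows why the swap fails: epoch_160.pth is a wrapper dict with 'epoch', 'state_dict', and 'optimizer' entries, while loadModel expects the bare generator weights. A minimal extraction sketch based on the key names shown in this thread (the stand-in model and file names here are just for the demo; the real network is DINet_five_Ref and the real file is checkpoint/epoch_160.pth):

```python
import torch
import torch.nn as nn

# Stand-in generator so the sketch runs end to end; the real one is DINet_five_Ref.
net_g = nn.Conv2d(3, 3, kernel_size=3)

# Build a file shaped like the training checkpoint from this thread.
torch.save(
    {"epoch": 160,
     "state_dict": {"net_g": net_g.state_dict(), "net_d": {}},
     "optimizer": {}},
    "epoch_160_demo.pth",
)

# Extraction: keep only the generator weights and save them bare, so the
# plain net.load_state_dict(torch.load(path)) in render_model.py accepts them.
ckpt = torch.load("epoch_160_demo.pth", map_location="cpu")
torch.save(ckpt["state_dict"]["net_g"], "render_demo.pth")

# The bare file now round-trips straight into the model.
net_g.load_state_dict(torch.load("render_demo.pth", map_location="cpu"))
print(sorted(torch.load("render_demo.pth").keys()))  # ['bias', 'weight']
```

Pointing loadModel at the extracted file (or saving it as render.pth) should clear the Missing/Unexpected key error, assuming the key names inside 'net_g' match DINet_five_Ref.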