zhengkw18 / face-vid2vid

Unofficial implementation of the paper "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing" (CVPR 2021 Oral)

Will you please share the trained checkpoint ? #1

Open DWCTOD opened 3 years ago

DWCTOD commented 3 years ago

Hello, and thank you for open-sourcing the code. Would you consider sharing the pretrained model later? Thanks!

zhengkw18 commented 3 years ago

I uploaded the checkpoint to Google Drive: https://drive.google.com/file/d/1_xTqfk-cjouOEjZ3gkDyYwdq7mYeiAji/view?usp=sharing But note that the model requires a huge amount of training. My checkpoint received only limited training (7 days on 4 2080Ti GPUs), so it currently does not perform well, especially when changing the viewpoint. For comparison, the author of the Alibaba implementation trained his version on 8 A100s for 5 days, roughly 10~20 times the training compute I used. Unlike his work, however, I rearranged the code structure to keep it concise, and I use DistributedDataParallel instead of DataParallel, which may be faster. Sadly, this code may never get enough training again.
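(For reference, a minimal sketch of the kind of DistributedDataParallel setup mentioned above; the function and variable names here are illustrative, not the repo's actual code.)

```python
# Sketch: one process per GPU, launched e.g. via torchrun.
# All names (train_worker, dataset, model) are hypothetical.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler

def train_worker(local_rank, model, dataset, num_epochs=100):
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)
    model = model.cuda(local_rank)
    # DDP wraps the model; gradients are all-reduced across processes.
    model = DDP(model, device_ids=[local_rank])

    # DistributedSampler shards the dataset so each process sees a different slice.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=4, sampler=sampler, num_workers=4)

    for epoch in range(num_epochs):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for batch in loader:
            ...  # forward / backward / optimizer step as usual
```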

DWCTOD commented 3 years ago


Thank you, I'll give it a try. Much appreciated. I also tried https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis, but I trained on a fairly small dataset (around 400 videos) and the results were poor: the generated faces do not move at all.

DWCTOD commented 3 years ago

There seems to be a small bug here: video_array = [img_as_float32(io.imread(os.path.join(args.driving, frames[idx]))) for idx in range(num_frames)]. This line appears to expect a directory of frame images for the driving input; passing a video file directly raises an error.

zhengkw18 commented 3 years ago


Yes. Because I use FOMM's data-processing scripts, the default training and test data are sequences of PNG frames. For a video, you can either modify the code or pre-convert the video to image frames first.
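(A minimal sketch of pre-converting a driving video into PNG frames so that the existing frame-list loading code works unchanged; the file paths and the 256x256 size are assumptions, not values taken from the repo.)

```python
# Sketch: dump a video into a directory of numbered PNG frames.
# "driving.mp4" and "driving_frames" are hypothetical paths.
import os
import imageio
from skimage import img_as_ubyte
from skimage.transform import resize

video_path = "driving.mp4"
out_dir = "driving_frames"
os.makedirs(out_dir, exist_ok=True)

reader = imageio.get_reader(video_path)
for idx, frame in enumerate(reader):
    # Resize to 256x256 (the resolution commonly used by FOMM-style preprocessing).
    frame = resize(frame, (256, 256))
    imageio.imwrite(os.path.join(out_dir, f"{idx:07d}.png"), img_as_ubyte(frame))
reader.close()
```

After this, pointing args.driving at the frame directory should let the existing io.imread loop run as intended.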