Weizhi-Zhong / IP_LAP

CVPR 2023 implementation of Identity-Preserving Talking Face Generation With Landmark and Appearance Priors
Apache License 2.0

The face cannot be detected when I infer my own input video. #31

Open AndrewDing616 opened 11 months ago

AndrewDing616 commented 11 months ago

Thanks for your great work! But when I run inference_single.py on my own video, the code does not work. It shows:

Traceback (most recent call last):
  File "inference_single.py", line 509, in <module>
    full = merge_face_contour_only(original_background, T_input_frame[2], T_ori_face_coordinates[2][1], fa)  # (H,W,3)
  File "inference_single.py", line 145, in merge_face_contour_only
    preds = fa.get_landmarks(input_img)[0]  # 68x2
TypeError: 'NoneType' object is not subscriptable

So I wonder how to solve this problem, thanks again!
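For reference, the failing call is fa.get_landmarks(input_img), which returns None when no face is found in the frame. A minimal guard that turns the crash into an explicit error might look like this (the wrapper function and its error message are my own sketch, not the repository's code):

    import numpy as np
    import face_alignment

    # Created roughly the way the repository appears to create `fa`; newer
    # face_alignment releases rename LandmarksType._2D to LandmarksType.TWO_D.
    fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=False)

    def detect_landmarks(input_img: np.ndarray) -> np.ndarray:
        """Return the 68x2 landmarks of the first detected face, or fail clearly."""
        preds = fa.get_landmarks(input_img)   # None when no face is detected
        if not preds:
            raise RuntimeError(
                "No face detected in this frame; try a video where the face "
                "is larger and clearly visible."
            )
        return preds[0]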

yaserabdelaziz commented 11 months ago

I think a video that is more zoomed-in on the face might work. IP_LAP uses MediaPipe's face landmark detection, which requires the face to be fairly large in the frame to work.
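As a rough sketch, you could pre-crop each frame around the detected face with MediaPipe's face detection before running the pipeline, so the face fills more of the image (the margin value and detector settings below are illustrative assumptions, not project defaults):

    import cv2
    import mediapipe as mp

    def crop_to_face(frame_bgr, margin=0.5):
        """Crop the frame to the first detected face, expanded by `margin` on each side."""
        h, w = frame_bgr.shape[:2]
        with mp.solutions.face_detection.FaceDetection(
                model_selection=1, min_detection_confidence=0.5) as detector:
            results = detector.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if not results.detections:
            return frame_bgr                  # no face found; leave the frame unchanged
        box = results.detections[0].location_data.relative_bounding_box
        x0 = max(int((box.xmin - margin * box.width) * w), 0)
        y0 = max(int((box.ymin - margin * box.height) * h), 0)
        x1 = min(int((box.xmin + (1 + margin) * box.width) * w), w)
        y1 = min(int((box.ymin + (1 + margin) * box.height) * h), h)
        return frame_bgr[y0:y1, x0:x1]

You would then write the cropped frames back out as a new video and feed that to inference_single.py.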

AndrewDing616 commented 10 months ago

Thanks for your reply!