zhengyuf / PointAvatar

Official Repository for CVPR 2023 paper PointAvatar: Deformable Point-based Head Avatars from Videos.

Eye opening and closing (eye blink) does not work in either training or driving #23

Closed. Edward20230101 closed this issue 7 months ago

Edward20230101 commented 7 months ago

Hi, great work!

I trained on my own video data and drove the model successfully. The results are quite good!

However, I find that the trained model cannot blink, and it cannot blink when being driven either. I would like to ask for your kind help with this problem. For reference, the head in the training video does blink, the video is around 5 minutes long, and the driving video also contains blinks.

I would appreciate your help at your earliest convenience.

Edward

Arthur-Evans commented 7 months ago

Hi, I am working on my final project. May I ask how you trained on your own video? It seems that the training data is not only photos but also masks, flame_params.json, etc., obtained from a series of preprocessing operations, and the author's code does not include these preprocessing steps. So far I have trained on the author's video; what should I do next? In other words, train.py trains on the video and test.py renders images and evaluates them. Say I want to build a system: how do I drive the model? I would appreciate it if you could answer me!

Xzy765039540 commented 7 months ago

> Hi, I am working on my final project. May I ask how you trained on your own video? It seems that the training data is not only photos but also masks, flame_params.json, etc., obtained from a series of preprocessing operations, and the author's code does not include these preprocessing steps. So far I have trained on the author's video; what should I do next? In other words, train.py trains on the video and test.py renders images and evaluates them. Say I want to build a system: how do I drive the model? I would appreciate it if you could answer me!

The author explicitly says that the preprocessing comes from her previous project IMavatar. You may want to check this repo: https://github.com/zhengyuf/IMavatar/tree/main/preprocess
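
For reference, that preprocessing produces per-frame masks plus FLAME expression/pose and camera parameters collected in a flame_params.json, and driving the trained avatar essentially means feeding it such a parameter sequence tracked from another video. Below is a minimal sketch for inspecting that file; the key names are assumptions based on the IMavatar data format and may differ in your version.

```python
import json

# Inspect the FLAME tracking produced by the IMavatar preprocessing.
# NOTE: the key names below ("frames", "expression", "pose", ...) are
# assumptions based on the IMavatar data format and may differ.
with open("flame_params.json") as f:
    params = json.load(f)

frames = params["frames"]
print(f"{len(frames)} tracked frames")

first = frames[0]
print("per-frame keys:", sorted(first.keys()))
print("expression dim:", len(first["expression"]))  # FLAME expression coefficients
print("pose dim:", len(first["pose"]))              # jaw / global pose parameters
```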

zhengyuf commented 7 months ago

Hi Edward,

Eye blinking is indeed difficult for PointAvatar. It's likely that the preprocessing step that estimates FLAME parameters based on DECA and facial landmarks already fails to capture eye blinking. I think the problem cannot be solved with a change of hyperparameters. Some new losses need to be designed to supervise the closing of eyes.
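
For example, one could add a term that compares the eye aperture (the vertical distance between upper and lower eyelid landmarks) of the rendered avatar to that of the detected landmarks. The snippet below is only an untested sketch of that idea, not something implemented in this repository; the landmark indices follow the standard 68-point convention and the weighting is a placeholder.

```python
import torch

# Standard iBUG 68-point eyelid pairs (0-based): upper-lid landmarks and the
# lower-lid landmarks directly beneath them, for the left and right eye.
UPPER_LID = [37, 38, 43, 44]
LOWER_LID = [41, 40, 47, 46]

def eye_closure_loss(pred_lmks, gt_lmks, weight=1.0):
    """Penalize mismatch in eye aperture between rendered and detected landmarks.

    pred_lmks, gt_lmks: (B, 68, 2) tensors of 2D facial landmarks.
    Illustrative sketch only; indices and weighting are placeholders.
    """
    pred_gap = pred_lmks[:, UPPER_LID, 1] - pred_lmks[:, LOWER_LID, 1]
    gt_gap = gt_lmks[:, UPPER_LID, 1] - gt_lmks[:, LOWER_LID, 1]
    return weight * (pred_gap - gt_gap).abs().mean()
```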

best, Yufeng

Arthur-Evans commented 7 months ago

> Hi, I am working on my final project. May I ask how you trained on your own video? It seems that the training data is not only photos but also masks, flame_params.json, etc., obtained from a series of preprocessing operations, and the author's code does not include these preprocessing steps. So far I have trained on the author's video; what should I do next? In other words, train.py trains on the video and test.py renders images and evaluates them. Say I want to build a system: how do I drive the model? I would appreciate it if you could answer me!
>
> The author explicitly says that the preprocessing comes from her previous project IMavatar. You may want to check this repo: https://github.com/zhengyuf/IMavatar/tree/main/preprocess

Your answer is much appreciated. I am an undergraduate student with limited fundamentals, so for now I can only reproduce the author's great work. Thank you for your understanding.

Edward20230101 commented 7 months ago

> Hi Edward,
>
> Eye blinking is indeed difficult for PointAvatar. It's likely that the preprocessing step that estimates FLAME parameters based on DECA and facial landmarks already fails to capture eye blinking. I think the problem cannot be solved with a change of hyperparameters. Some new losses need to be designed to supervise the closing of eyes.
>
> best, Yufeng

Understood. Thanks very much! Wishing you even greater work in the future.

Best, Edward