KwaiVGI / LivePortrait

Bring portraits to life!
https://liveportrait.github.io

Retargetting parameters ignored #148

Open samsonsite1 opened 3 months ago

samsonsite1 commented 3 months ago

Hello, I'm new to LivePortrait.

I'm running the app with: python app.py

locally in my web browser on Windows. I'm trying to generate a video with retargeting values set.

I tried changing both the target eyes-open ratio and the target lip-open ratio to 0.8 (both initially set to zero), then clicked the Animate button; however, both parameters are ignored.

In argument_config.py, I tried changing these values to True:

```python
flag_eye_retargeting: bool = True  # not recommend to be True, WIP
flag_lip_retargeting: bool = True  # not recommend to be True, WIP
```

But those parameters are still ignored when I regenerate the video.

I don't see any checkboxes or switches to enable these parameters; I just see the two Retargeting sliders.

Are there any instructions for getting the Retargeting parameters to work for video in a browser?

Also, how does one pass a retargeting value of 0.8 to inference.py? I don't see any command-line parameters that accept such a value.

```
usage: inference.py [-h] [OPTIONS]

  --flag-eye-retargeting    not recommend to be True, WIP (default: False)
  --flag-lip-retargeting    not recommend to be True, WIP (default: False)
```

Command-line parameters for those floating-point retargeting values seem to be missing from inference.py.
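For context, the gap described above can be illustrated with a small argparse sketch: the boolean retargeting flags are exposed as switches, while the float target ratios have no corresponding option. The `--input_eye_ratio` / `--input_lip_ratio` names below are hypothetical additions, not part of the actual LivePortrait CLI, which generates its options from config dataclasses.

```python
import argparse

# Sketch of an inference.py-style CLI: boolean retargeting switches exist,
# and hypothetical float options show what a target-ratio parameter could
# look like if it were added. Names marked "hypothetical" are assumptions.
parser = argparse.ArgumentParser()
parser.add_argument("--flag_eye_retargeting", action="store_true",
                    help="not recommend to be True, WIP (default: False)")
parser.add_argument("--flag_lip_retargeting", action="store_true",
                    help="not recommend to be True, WIP (default: False)")
parser.add_argument("--input_eye_ratio", type=float, default=None,
                    help="hypothetical: constant target eyes-open ratio, e.g. 0.8")
parser.add_argument("--input_lip_ratio", type=float, default=None,
                    help="hypothetical: constant target lip-open ratio, e.g. 0.8")

args = parser.parse_args(["--flag_eye_retargeting", "--input_eye_ratio", "0.8"])
print(args.flag_eye_retargeting, args.input_eye_ratio)  # True 0.8
```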

Thanks!

zzzweakman commented 3 months ago

Hi, I think you may have modified the values in inference_config.py. However, the final parameters are actually controlled by argument_config.py. You should update the values in argument_config.py to see the desired changes. @samsonsite1

samsonsite1 commented 3 months ago

Thanks for the reply.

Sorry, I don't see any floating-point variables that accept 0.8 in argument_config.py or inference_config.py. Could you please be more specific? Thanks.

FurkanGozukara commented 3 months ago

Well, to make it work I had to add new functionality.

I don't know why the authors didn't make it work as expected.

You can see it in my tutorial: https://youtu.be/FPtpNrmuwXk

samsonsite1 commented 3 months ago

Thanks for the reply, but I would appreciate a solution posted here if it's no trouble.

I would manually edit the script myself if I knew which parameter to edit, as the GUI app doesn't retarget the video. It only retargets a single image.

samsonsite1 commented 3 months ago

Thanks, I found your LivePortrait project here: https://github.com/FurkanGozukara/LivePortrait/tree/main

Retargeting values are now being used when generating the video, but it seems to suffer from the same issue as the original project: the eyes and lips don't animate as expected when retargeting is enabled.

FurkanGozukara commented 3 months ago

@samsonsite1

I have shared how to make it work on the installer scripts page; there is a way to keep it working. I learnt this after the videos were published, so sadly it's not mentioned in the video, but hopefully I will show it in the next one.

samsonsite1 commented 3 months ago

It would be nice to control the eyes and lips ratios like keyframes per video frame, something like ComfyUI can do, but I'm not trying to be that creative. I just need to apply a constant ratio value for all video frames.
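The two modes described here, a constant ratio for every frame versus keyframed control, can be sketched as simple schedule builders. This is purely illustrative; none of these function names exist in LivePortrait, and the keyframes are assumed to be sorted by frame index.

```python
# Hypothetical sketch of per-frame target-ratio schedules. A constant
# value is the same ratio repeated; keyframe-style control (as in the
# ComfyUI comparison above) linearly interpolates between
# (frame_index, ratio) pairs, which must be sorted by frame index.

def constant_ratio_schedule(ratio: float, n_frames: int) -> list[float]:
    """One identical target ratio for all frames."""
    return [ratio] * n_frames

def keyframe_ratio_schedule(keyframes: list[tuple[int, float]],
                            n_frames: int) -> list[float]:
    """Linearly interpolate ratios between sorted (frame, ratio) keyframes."""
    schedule = []
    for f in range(n_frames):
        if f <= keyframes[0][0]:          # clamp before first keyframe
            schedule.append(keyframes[0][1])
            continue
        if f >= keyframes[-1][0]:         # clamp after last keyframe
            schedule.append(keyframes[-1][1])
            continue
        for (f0, r0), (f1, r1) in zip(keyframes, keyframes[1:]):
            if f0 <= f <= f1:
                t = (f - f0) / (f1 - f0)
                schedule.append(r0 + t * (r1 - r0))
                break
    return schedule

print(constant_ratio_schedule(0.8, 4))                    # [0.8, 0.8, 0.8, 0.8]
print(keyframe_ratio_schedule([(0, 0.0), (4, 0.8)], 5))   # ramps from 0.0 to 0.8
```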

FurkanGozukara commented 2 months ago

I upgraded to V3, which now has video-to-video, image-to-video, and Target Eye / Lip Open Ratio controls.

(screenshots: the V3 Gradio interface at 127.0.0.1:7860, captured 2024-07-21)

somenewaccountthen commented 2 months ago

So what I set under Retargeting is used as parameters during the creation of the animation? Because I thought it was a separate tool to change an image.

FurkanGozukara commented 2 months ago

> So what I set under Retargeting is used as parameters during the creation of the animation? Because I thought it was a separate tool to change an image.

It is not only a separate tool; it can also be used during animation. I am able to use it that way.

Mystery099 commented 2 months ago

Thanks for your feedback. @samsonsite1

If you are using app.py and you want to apply flag_eye_retargeting or flag_lip_retargeting to the 🖼️ Source Image or the 🎞️ Source Video, you can set both options to True in src/config/argument_config.py. Or you can use:

```
python app.py --flag_eye_retargeting --flag_lip_retargeting
```

In this way, the eyes-open ratio and lip-open ratio of each driving frame will be transferred to the source image or the corresponding source frame. That is, the eyes and lips of the face in the animated image or frame will be open to the same extent as in the corresponding driving frame.

In the Retargeting area of the Gradio interface, since there is no driving frame, we provide the target eyes-open ratio and target lip-open ratio to control the eyes-open and lip-open extents of the uploaded image in Retargeting Input. These two ratios will not affect other inputs, such as 🖼️ Source Image or 🎞️ Source Video.

For inference.py and areas other than the Retargeting area of the Gradio interface, since there is a driving video, when flag_eye_retargeting or flag_lip_retargeting is turned on, the target eyes-open ratio and target lip-open ratio are provided by the driving frame, so we do not add these two options.
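The rule explained above, sliders supply the target ratios only when there is no driving frame, otherwise the driving frame wins, can be summarized in a tiny selection function. The names below are illustrative only and do not correspond to LivePortrait internals.

```python
# Minimal sketch of the behavior described above (names are illustrative,
# not LivePortrait internals): in the Retargeting area there is no driving
# frame, so the slider values act as the target ratios; during animation,
# the per-frame ratios measured from the driving video are used instead.

def pick_target_ratios(driving_frame_ratios, slider_eye, slider_lip):
    if driving_frame_ratios is not None:
        # animation path: ratios come from the current driving frame
        return driving_frame_ratios
    # Retargeting-area path: ratios come from the two sliders
    return (slider_eye, slider_lip)

print(pick_target_ratios(None, 0.8, 0.8))         # (0.8, 0.8) -- sliders used
print(pick_target_ratios((0.35, 0.1), 0.8, 0.8))  # (0.35, 0.1) -- driving frame wins
```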