Rudrabha / Wav2Lip

This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For an HD commercial model, please try out Sync Labs:
https://synclabs.so

Mouth Position Delay / Lag when motion is faster #46

Closed: AlonDan closed this issue 4 years ago

AlonDan commented 4 years ago

My first few tries were on some random videos, and I noticed that the mouth is not always placed on the correct spot. So I did a few experiments trying to replicate the issue before posting it here.

I downloaded a head made with GAN2 (random) and made a simple video of it moving at different speeds. It should be easier to track compared to a video with changing angles and so on, since it's just a still image moving around at different speeds, so you can SEE the visual issue.

ISSUE Description:

  1. It seems like when there is fast motion, the mouth falls "behind" the frame, like a delay.
  2. I don't think it's a tracking/detection issue, because the mouth "position" looks almost right, just about a frame off.

The above are only guesses. I only have some experience with other deepfake tools such as FaceSwap and DFL, so I know that even blurry, fast motion can be detected. I just don't know how it works here; it's probably done differently, but I hope there is a solution to make even FAST motions trackable so the mouth sits in the right position without a lag / delay.

I hope to see this fixed if possible; that would be great. Please keep up the good work! :)

To keep things clear and simple, the example video is 640x360 at 24 fps.

Download the example video attached as a .ZIP file:

result_video.zip

A few screenshots from the video showing that the mouth is not where it should be:

Screenshots: 1a, 1b, 1c, 2a, 2b, 2c

prajwalkr commented 4 years ago

Hello, thank you for the clear and detailed description. I think I understand the issue; it is caused by this line: https://github.com/Rudrabha/Wav2Lip/blob/0600d0f4da5ce75865725249fdcb88b6dc2d61de/inference.py#L97

This line smooths the face bounding boxes over a 5-frame window. This is desirable because face detection can at times be noisy and change a lot across frames. But when there is rapid motion, the face position changes so much that averaging the box location across frames is undesirable.
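For readers following along, here is a minimal sketch of that moving-average idea (not a copy of the repo's code; the function name, window handling, and shapes are illustrative): each per-frame box is replaced by the mean of the boxes in a window of T frames, which suppresses detector jitter but makes the box trail a fast-moving face.

```python
import numpy as np

def smooth_boxes(boxes, T=5):
    """Replace each per-frame face box with the mean of a T-frame window.

    boxes: array of shape (num_frames, 4) holding (x1, y1, x2, y2) per frame.
    Averaging hides detector jitter, but when the face moves quickly the
    averaged box lags behind the true face position (the delay reported above).
    """
    boxes = np.asarray(boxes, dtype=float)
    smoothed = boxes.copy()
    for i in range(len(boxes)):
        # Near the end of the sequence, fall back to the last full window.
        window = boxes[i:i + T] if i + T <= len(boxes) else boxes[-T:]
        smoothed[i] = window.mean(axis=0)
    return smoothed
```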

We will look into this issue and update you once there is some sort of fix.

prajwalkr commented 4 years ago

Can you pull the latest commit, try it with the --nosmooth argument, and let me know?

AlonDan commented 4 years ago

Just to be clear: I'm a newbie when it comes to GitHub and I'm not a programmer, so everything is very new to me; all I can do is follow instructions and tutorials. So I hope I did it correctly:

I've downloaded "inference.py" made a backup of the old one, and put this on the main master directory.

Then I used the same files as in the test for this issue and added --nosmooth at the end, like this: python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face MyDir/video.avi --audio MyDir/audio.wav --outfile MyDir/result_video.mp4 --nosmooth

But I get this error: inference.py: error: unrecognized arguments: --nosmooth

I double-checked that I put in the NEWEST file (from about 30 minutes ago) instead of the old one, but it still seems like it's not recognizing the new argument.

Maybe I did something wrong. I'm not sure I'm the right person to help with testing, but I can try if it's simple enough for me to follow. Sorry for the mess.

dipam7 commented 4 years ago

Hey, I had a similar issue. I ran it with the --nosmooth argument and it ran to 100%, but then gave me the following error:

Traceback (most recent call last):
  File "inference.py", line 280, in <module>
    main()
  File "inference.py", line 250, in main
    total=int(np.ceil(float(len(mel_chunks))/batch_size)))):
  File "/usr/local/lib/python3.6/dist-packages/tqdm/std.py", line 1127, in __iter__
    for obj in iterable:
  File "inference.py", line 111, in datagen
    face_det_results = face_detect(frames) # BGR2RGB for CNN face detection
  File "inference.py", line 101, in face_detect
    results = [[image[y1: y2, x1:x2], (y1, y2, x1, x2)] for image, (x1, y1, x2, y2) in zip(images, boxes)]
UnboundLocalError: local variable 'boxes' referenced before assignment

I see that line 101 uses boxes: https://github.com/Rudrabha/Wav2Lip/blob/8549b79e14b3fe48e750da5d8635894723babd10/inference.py#L101

which is only assigned if the condition on line 100 is true; otherwise it is never set: https://github.com/Rudrabha/Wav2Lip/blob/8549b79e14b3fe48e750da5d8635894723babd10/inference.py#L100
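In other words, the failure pattern looks roughly like the sketch below (simplified; the function names are illustrative and it reuses the `smooth_boxes` sketch from earlier in this thread): when --nosmooth is passed, the branch that assigns `boxes` is skipped, so the list comprehension reads a variable that was never set. Assigning the raw boxes first and only smoothing them conditionally avoids the UnboundLocalError.

```python
import numpy as np

def crop_faces_buggy(images, results, nosmooth):
    # Buggy pattern: `boxes` is only assigned inside the branch,
    # so with nosmooth=True the return line raises UnboundLocalError.
    if not nosmooth:
        boxes = smooth_boxes(np.array(results), T=5)
    return [[img[y1:y2, x1:x2], (y1, y2, x1, x2)]
            for img, (x1, y1, x2, y2) in zip(images, boxes)]

def crop_faces_fixed(images, results, nosmooth):
    # Fix: always assign the raw boxes, and only smooth them when allowed.
    boxes = np.array(results)
    if not nosmooth:
        boxes = smooth_boxes(boxes, T=5)
    return [[img[y1:y2, x1:x2], (y1, y2, x1, x2)]
            for img, (x1, y1, x2, y2) in zip(images, boxes)]
```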

prajwalkr commented 4 years ago

@AlonDan can you check now if it is better?

AlonDan commented 4 years ago

Sure, I just tried it. Now it won't start running; there is an error right after I press ENTER:

File "inference.py", line 6
    <!DOCTYPE html>
    ^
SyntaxError: invalid syntax

Even if I just run the main command: python inference.py

I get the same error.

prajwalkr commented 4 years ago

Hello, I think this is an error on your end; as you can see, the repo does not have that on line 6: https://github.com/Rudrabha/Wav2Lip/blob/11a8eac1aa03c27596961b7218845f0028035bc7/inference.py#L6
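The `<!DOCTYPE html>` in the error suggests the GitHub web page was saved instead of the raw Python file. A quick, illustrative way to check the downloaded file (this snippet is not part of the repo):

```python
# Illustrative sanity check: a correctly downloaded inference.py starts with
# Python code, not HTML. If the first line looks like HTML, re-download the
# file from GitHub's "Raw" view (or clone the repository again).
with open('inference.py', 'r', encoding='utf-8') as f:
    first_line = f.readline().strip().lower()

if first_line.startswith('<!doctype') or first_line.startswith('<html'):
    print('This is a saved HTML page, not the Python source; re-download the raw file.')
else:
    print('Looks like a plain Python file.')
```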

AlonDan commented 4 years ago

Hmm, if that's the case, I'm not sure how to fix it. As I explained, unfortunately I'm a newbie at this.

If I switch back to the backup (the old file version from before the issue), everything works, but of course without --nosmooth.

Any suggestions for fixing the issue on my side so I can test the new file version? Thanks in advance.

prajwalkr commented 4 years ago

Any suggestions for fixing the issue on my side so I can test the new file version?

Download the latest version of the file once more, or download the whole repo again.

AlonDan commented 4 years ago

GOOD NEWS! I downloaded the latest file version again and re-ran the test on the SAME video as in the original issue. Now it's STEADY, as it should be. I probably need to test on more complicated video situations, but on the current one it's IMPRESSIVE!

Great job, thank you for the super quick fix! Here is the current version in case anyone would like to see the difference with --nosmooth. The mouth now follows every single frame!

DOWNLOAD the same video processed with --nosmooth (zip file):

result_video (nosmooth enabled).zip

Suggestion: Please consider mentioning the --nosmooth option on the main GitHub page in case people would like to use it.

Same frame with --nosmooth and without: 1a, 1b

prajwalkr commented 4 years ago

Please consider mentioning the --nosmooth option on the main GitHub page in case people would like to use it.

Will do it right away!

dipam7 commented 4 years ago

I'm seeing weird artifacts near the lips for some videos even after using --nosmooth. Is this because I am using high res videos?

prajwalkr commented 4 years ago

It's hard to say without seeing the video. But yes, high-res videos and faces can produce artefacts.