Liming-belief opened this issue 1 month ago
Are the Percep, Fake, and Real losses always 0.0?
The situation you mentioned did not occur for me. Step 191706 | L1: 0.04096 | Vgg: 0.1543 | SW: 0.03 | Sync: 3.293 | DW: 0.025 | Percep: 1.905 | Fake: 0.188, Real: 0.2206 | Load: 0.0115, Train: 1.85. The generated lip shape matches the lip shape in the reference frame, but differs from the actual lip shape. @see2run
Did you train SyncNet and Wav2Lip with the unmodified scripts, or did you make any changes?
I trained using train_syncnet_sam.py and hq_wav2lip_sam_train.py without making any changes to the code.
@Liming-belief While training SyncNet, did you run into an issue where the loss gets stuck? I am stuck there and need help.
I have a question: at the beginning of training, are the values for Sync, DW, Percep, Fake, and Real all 0.0, like this?
Step 683 | L1: 0.09317 | Vgg: 0.3026 | SW: 0.03 | Sync: 0.0 | DW: 0.0 | Percep: 0.0 | Fake: 0.0, Real: 0.0 | Load: 0.008834, Train: 1.229
Or were their values nonzero from the beginning?
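For what it's worth, this pattern of all-zero terms early in training is consistent with how Wav2Lip-style loops gate the extra losses: the sync and discriminator terms are multiplied by weights that start at 0 (or are only enabled after a warm-up condition), so the logger prints 0.0 for them until the gate opens. Below is a minimal, hypothetical sketch of that gating; the names `syncnet_wt` and `disc_wt` follow the original Wav2Lip code, and the exact conditions in train_syncnet_sam.py / hq_wav2lip_sam_train.py may differ.

```python
def combined_loss(l1, vgg, sync, percep, fake, real,
                  syncnet_wt=0.0, disc_wt=0.0):
    """Hypothetical sketch: combine reconstruction losses with gated extra terms."""
    # Reconstruction terms (L1, VGG) are always active.
    loss = l1 + vgg
    # Sync and discriminator terms only contribute once their weights are
    # raised above 0 (e.g. after an eval-loss threshold is reached), so the
    # weighted values logged before that point are 0.0.
    loss += syncnet_wt * sync
    loss += disc_wt * (percep + fake + real)
    return loss

# Early in training the gated terms drop out entirely:
early = combined_loss(0.09317, 0.3026, sync=1.9, percep=1.9, fake=0.2, real=0.2)
# Later, with the weights enabled, the same raw terms start contributing:
later = combined_loss(0.09317, 0.3026, sync=1.9, percep=1.9, fake=0.2, real=0.2,
                      syncnet_wt=0.03, disc_wt=0.025)
```

So zeros at step 683 are not necessarily a bug; the question is whether the weights ever switch on later in your run.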
Hello, I trained SyncNet and Wav2Lip until the loss dropped to between 0.25 and 0.3, but at actual inference the character's lips do not move. What could be the reason for this?