TMElyralab / MuseTalk

MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting

The weight of L1 #80

Closed chunyu-li closed 1 month ago

chunyu-li commented 1 month ago

The loss function is $L = \lambda L_1 + L_2$. Could you please tell me the value of $\lambda$?

alexLIUMinhao commented 1 month ago

Hi,

This weight is used to balance the value of the two losses during the training process, so you can adjust it according to the actual values. The released model uses a value of 2.

chunyu-li commented 1 month ago

> Hi,
>
> This weight is used to balance the value of the two losses during the training process, so you can adjust it according to the actual values. The released model uses a value of 2.

Thank you very much for your answer!

gobigrassland commented 1 month ago

> Hi,
>
> This weight is used to balance the value of the two losses during the training process, so you can adjust it according to the actual values. The released model uses a value of 2.

In the figure on the GitHub project page, $\lambda$ multiplies the latent loss term, but in the train_codes code it is the other way around. Is this a mistake?

```python
# Mask the top half of the image and calculate the loss only for the lower half of the image.
image_pred_img = image_pred_img[:, :, image_pred_img.shape[2]//2:, :]
image = image[:, :, image.shape[2]//2:, :]
loss_lip = F.l1_loss(image_pred_img.float(), image.float(), reduction="mean")  # the loss of the decoded images
loss_latents = F.l1_loss(image_pred.float(), latents.float(), reduction="mean")  # the loss of the latents
loss = 2.0 * loss_lip + loss_latents  # add some weight to balance the loss
```
chunyu-li commented 1 month ago

> Hi, This weight is used to balance the value of the two losses during the training process, so you can adjust it according to the actual values. The released model uses a value of 2.
>
> In the figure on the GitHub project page, $\lambda$ multiplies the latent loss term, but in the train_codes code it is the other way around. Is this a mistake?
>
> ```python
> # Mask the top half of the image and calculate the loss only for the lower half of the image.
> image_pred_img = image_pred_img[:, :, image_pred_img.shape[2]//2:, :]
> image = image[:, :, image.shape[2]//2:, :]
> loss_lip = F.l1_loss(image_pred_img.float(), image.float(), reduction="mean")  # the loss of the decoded images
> loss_latents = F.l1_loss(image_pred.float(), latents.float(), reduction="mean")  # the loss of the latents
> loss = 2.0 * loss_lip + loss_latents  # add some weight to balance the loss
> ```

Indeed, I also think the author made a mistake here.

czk32611 commented 1 month ago

> Hi, This weight is used to balance the value of the two losses during the training process, so you can adjust it according to the actual values. The released model uses a value of 2.
>
> In the figure on the GitHub project page, $\lambda$ multiplies the latent loss term, but in the train_codes code it is the other way around. Is this a mistake?
>
> ```python
> # Mask the top half of the image and calculate the loss only for the lower half of the image.
> image_pred_img = image_pred_img[:, :, image_pred_img.shape[2]//2:, :]
> image = image[:, :, image.shape[2]//2:, :]
> loss_lip = F.l1_loss(image_pred_img.float(), image.float(), reduction="mean")  # the loss of the decoded images
> loss_latents = F.l1_loss(image_pred.float(), latents.float(), reduction="mean")  # the loss of the latents
> loss = 2.0 * loss_lip + loss_latents  # add some weight to balance the loss
> ```

I double-checked the code: the figure is wrong, the code is correct... the weight on the pixel-space loss is 2.
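To make the resolved convention concrete, here is a minimal standalone sketch of the weighting, written with NumPy so it runs without the training setup. The tensor shapes and variable contents are illustrative assumptions; only the lower-half masking and the `2.0 * loss_lip + loss_latents` combination come from the train_codes snippet above.

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error, mirroring F.l1_loss(..., reduction="mean")."""
    return np.abs(pred - target).mean()

# Illustrative tensors (batch, channels, height, width) -- shapes are assumptions.
rng = np.random.default_rng(0)
image_pred_img = rng.standard_normal((2, 3, 8, 8))  # decoded predicted images
image = rng.standard_normal((2, 3, 8, 8))           # ground-truth images
image_pred = rng.standard_normal((2, 4, 4, 4))      # predicted latents
latents = rng.standard_normal((2, 4, 4, 4))         # target latents

# Keep only the lower half of the images, as in the training code.
image_pred_img = image_pred_img[:, :, image_pred_img.shape[2] // 2:, :]
image = image[:, :, image.shape[2] // 2:, :]

loss_lip = l1_loss(image_pred_img, image)    # pixel-space (decoded image) loss
loss_latents = l1_loss(image_pred, latents)  # latent-space loss
loss = 2.0 * loss_lip + loss_latents         # pixel loss carries the weight of 2
print(loss)
```

So in the notation of the issue title, the released model effectively uses $L = 2\,L_{\text{pixel}} + L_{\text{latent}}$, i.e. $\lambda$ belongs on the pixel term, not the latent term as the project-page figure suggests.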