[Closed] wenhao728 closed this issue 1 year ago
Thanks for your great work in advance.
I have several questions about the training code and the implementation of the VGG loss and the L1 loss.
- From these lines of code
  https://github.com/bcmi/DCI-VTON-Virtual-Try-On/blob/107c2d393da182c8e2430bfcb7190d688f6f286d/ldm/models/diffusion/ddpm.py#L1697-L1706
  https://github.com/bcmi/DCI-VTON-Virtual-Try-On/blob/107c2d393da182c8e2430bfcb7190d688f6f286d/ldm/models/diffusion/ddpm.py#L1709-L1716
  it seems that training uses both the L1 loss and the VGG loss at the same time, but I have not found any mention of the L1 loss in the paper as of now (26/09/2023). Could you confirm whether the L1 loss is an ad-hoc/experimental setting, or whether it has been shown to bring a promising improvement?
- How did you decide the weight of each VGG layer?
- How did you balance the three loss terms:
  - L2 on the latent noise,
  - L1 on the pixels,
  - VGG (perceptual) loss on the pixels?
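For concreteness, the three terms would typically be combined as a weighted sum. Below is a minimal, framework-agnostic NumPy sketch of that combination (the repo itself uses PyTorch; the function name and the weights `w_l2`/`w_l1`/`w_vgg` are illustrative placeholders, not the repo's actual values):

```python
import numpy as np

def combined_loss(noise_pred, noise_gt, img_pred, img_gt, feats_pred, feats_gt,
                  w_l2=1.0, w_l1=0.1, w_vgg=0.1):
    """Weighted sum of the three loss terms listed above (weights are placeholders)."""
    # 1) L2 on the predicted latent noise -- the standard diffusion objective
    loss_l2 = np.mean((noise_pred - noise_gt) ** 2)
    # 2) L1 between decoded pixels and ground-truth pixels
    loss_l1 = np.mean(np.abs(img_pred - img_gt))
    # 3) Perceptual term: L1 distance between VGG feature maps, layer by layer
    loss_vgg = sum(np.mean(np.abs(p - g)) for p, g in zip(feats_pred, feats_gt))
    return w_l2 * loss_l2 + w_l1 * loss_l1 + w_vgg * loss_vgg
```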
I would appreciate it if you could share more insights here 🤗
@wenhao728 Does the L1 loss work in your tests?
- As you said, the L1 loss is only an experimental setting; the version in the paper does not include this loss.
- For the weight of each feature layer in the VGG loss, we mainly followed the implementations of some previous work and did not make many modifications.
- For the weights between the losses, we first roughly balance the different losses to the same order of magnitude, and then fine-tune these weights through cross-validation.
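To make the balancing step concrete, here is a small sketch of how initial loss weights could be derived by scaling each raw loss to the magnitude of the main L2 diffusion term. The per-layer VGG coefficients shown follow the common pix2pixHD-style convention used by many perceptual-loss implementations; all numeric values are hypothetical, not the repo's actual settings:

```python
# Hypothetical per-layer weights for the VGG perceptual loss, following the
# widely used pix2pixHD-style convention (deeper layers weighted more heavily).
vgg_layer_weights = [1 / 32, 1 / 16, 1 / 8, 1 / 4, 1.0]

# Hypothetical raw loss magnitudes measured on a few warm-up batches.
raw_losses = {"l2_latent": 0.05, "l1_pixel": 0.5, "vgg_pixel": 5.0}

# First pass: scale every auxiliary term down to the magnitude of the main
# L2 term, giving weights of roughly 1.0, 0.1, and 0.01 here.
# These starting points would then be fine-tuned via cross-validation.
reference = raw_losses["l2_latent"]
loss_weights = {name: reference / value for name, value in raw_losses.items()}
```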
@Limbor Thanks for your great work! Does the pretrained model use the loss setting in the paper or the loss setting in the code? Thanks!
> @wenhao728 Does the L1 loss work in your tests?
In my experiments on VITON-HD at a resolution of 512×384, with a total batch size of 16 and approximately 50k total training steps, the impact of the L1 loss and the VGG loss is negligible.
Thanks for the information. I tried many times and could not draw a clear conclusion either.