ToughStoneX / Self-Supervised-MVS

Pytorch codes for "Self-supervised Multi-view Stereo via Effective Co-Segmentation and Data-Augmentation"
152 stars 15 forks source link

Loss weights and resultant curve issue #23

Open TWang1017 opened 1 year ago

TWang1017 commented 1 year ago
    Hi, thanks a lot for your swift response and your reminder helps a lot.

One more thing: I train on the DTU dataset with augmentation and co-segmentation deactivated. The training loss looks like the curve below; the SSIM loss dominates the standard unsupervised loss under the default weights [12 × self.reconstr_loss (photo_loss) + 6 × self.ssim_loss + 0.05 × self.smooth_loss]. In this case, would it be sensible to change the weights, e.g. reduce 6 × self.ssim_loss to 1 × self.ssim_loss so that it falls in a similar range to reconstr_loss?
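For reference, the weighting described above can be sketched as follows. This is a minimal illustration of the scalar combination quoted from the thread, not the repository's actual code; the function name and example loss values are hypothetical:

```python
# Hypothetical sketch of the default weighting quoted above:
# 12 * reconstr_loss + 6 * ssim_loss + 0.05 * smooth_loss.
def total_loss(reconstr_loss, ssim_loss, smooth_loss,
               w_photo=12.0, w_ssim=6.0, w_smooth=0.05):
    """Combine the three self-supervision terms with scalar weights."""
    return w_photo * reconstr_loss + w_ssim * ssim_loss + w_smooth * smooth_loss

# Illustrative magnitudes (made up): if ssim_loss sits around 0.40 while
# reconstr_loss is around 0.02, the SSIM term (6 * 0.40 = 2.4) dominates.
default = total_loss(0.02, 0.40, 0.10)
# Lowering w_ssim from 6 to 1 brings the SSIM term (0.4) into the same
# range as the photometric term, which is the rebalancing asked about.
rebalanced = total_loss(0.02, 0.40, 0.10, w_ssim=1.0)
```

Whether such a rebalancing helps in practice still has to be checked empirically, since the relative magnitudes depend on the backbone and the data.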

Also, training does not seem steady; it fluctuates a lot. Any clues as to why this happens? Thanks in advance for your help.

[image: training loss curves]

Originally posted by @TWang1017 in https://github.com/ToughStoneX/Self-Supervised-MVS/issues/22#issuecomment-1339018531

ToughStoneX commented 1 year ago

The fluctuation of these losses may be due to inherent noise in the images. Note that in the photometric consistency loss, every region that can be projected from a source view to the reference view is used to compute the self-supervision loss. Consider which regions actually end up included. Taking DTU as an example: the white/black background, occluded regions, reflections, and so on. These regions have no valid correspondence, yet they are still counted in the self-supervision loss. The reason is that we compute the loss agnostic to such invalid cases, and since they occur in the DTU dataset from time to time, they disturb the training process.
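The masking idea implied above can be sketched as follows. This is a simplified, framework-agnostic illustration (not the repository's actual implementation): only pixels with a valid source-to-reference projection contribute to the photometric term, while background, occluded, or reflective pixels with no correspondence are excluded via a boolean mask. The function name and array shapes are assumptions:

```python
import numpy as np

def masked_photometric_loss(ref, warped, valid_mask):
    """Mean absolute photometric error over valid (reprojectable) pixels only.

    ref, warped: (H, W) intensity arrays; valid_mask: boolean (H, W) array
    that is False where no correspondence exists (background, occlusions,
    reflections), so those pixels cannot disturb the loss.
    """
    diff = np.abs(ref - warped)
    n_valid = valid_mask.sum()
    if n_valid == 0:
        # No valid correspondence at all: contribute nothing to the loss.
        return 0.0
    return float((diff * valid_mask).sum() / n_valid)
```

Without such a mask, the invalid regions described above are averaged into the loss, which is one plausible source of the fluctuation seen in the curves.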


TWang1017 commented 1 year ago

Hi, thanks for your explanation. So those are the challenges that occur in the photometric loss, especially in MVS: illumination changes and occlusions. Also, reconstr_loss and the SSIM loss do not seem to be in the same range. Would it be beneficial to tweak the default loss weights? I replaced the backbone, so I guess it is better not to stick with the default weights designed for CVP-MVSNet. Thanks