sun11711 opened this issue 1 week ago
This means the weight for regularization_G_B (--lambda_reg) should be increased, since the regularization loss is not decreasing.
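For intuition, here is a minimal sketch of how such a weight typically enters the total generator loss. The loss form, tensor shapes, and variable names below are assumptions for illustration, not the repo's actual code; the point is only that a larger lambda_reg makes the optimizer penalize a large unmatchability mask more heavily, pushing it toward sparsity.

```python
import torch

def generator_loss(consistency_B, mask_B, lambda_reg=1.0):
    """Hypothetical combination of the consistency term and the mask
    regularization term; names mirror the log entries in this thread."""
    # The regularization is assumed here to penalize the mask's area (its mean),
    # so a larger lambda_reg pushes the predicted mask toward sparsity.
    regularization_G_B = mask_B.mean()
    return consistency_B + lambda_reg * regularization_G_B

# Example: if regularization_G_B plateaus (e.g. around 0.5), raising lambda_reg
# increases its gradient contribution relative to consistency_B.
mask_B = torch.rand(1, 1, 64, 64)      # stand-in for a predicted mask
consistency_B = torch.tensor(0.0)      # stand-in for the consistency loss
loss = generator_loss(consistency_B, mask_B, lambda_reg=10.0)
```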
How well should the loss functions consistency_B and regularization_G_B converge? Does this behavior mean my training failed? The intermediate output images actually look OK.
regularization_G_B should gradually decrease and converge to a small value (but never zero), which corresponds to a sparse unmatchability mask. This also affects consistency_B. From your loss log it is clear that the unmatchability mask is too large, which is why the matchability consistency (consistency_B) is almost 0 (see Equation 15 in the paper). You can also visualize the predicted masks to confirm this.
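A minimal sketch of such a mask visualization, assuming the predicted unmatchability mask is available as a tensor in [0, 1] from the model's forward pass (the variable names below are placeholders, not the repo's API):

```python
import torch
from torchvision.utils import save_image

# `mask_B` is assumed to be the predicted unmatchability mask for one batch,
# shaped (N, 1, H, W) with values in [0, 1].
mask_B = torch.rand(4, 1, 128, 128)  # replace with the model's actual output

# Save the masks as a grayscale image grid; a mostly-white grid means the
# mask is too large, which matches consistency_B collapsing to ~0.
save_image(mask_B, "predicted_masks.png", nrow=2)

# Quick numeric check: the average mask activation should shrink as
# regularization_G_B converges to a small (but nonzero) value.
print("mean mask activation:", mask_B.mean().item())
```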
I ran the authors' code on my own dataset and found that the loss consistency_B is 0 while regularization_G_B stays at around 0.5. Is this normal?