VMinB12 opened this issue 1 month ago
From DIS: "intermediate supervision plays a typical role of regularizer for reducing the probability of over-fitting"
I see the motivation for adding intermediate supervision, but based on the loss curves we show, it does not appear to be working as intended. Is this really the expected behaviour?
We are retraining MVANet on the ImageMatte dataset and are observing some undesired behaviour in the loss. We made a validation split and are individually logging the following components of the loss: `final_loss`, `Loss_loc`, `Loss_glb`, and `Loss_map`.
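To make the question concrete, here is a minimal sketch of how we understand the loss decomposition: `final_loss` supervises the fused prediction, while the other three terms provide intermediate supervision on the side outputs. The helper names, weights, and exact loss form below are illustrative assumptions on our side, not copied from `train.py`:

```python
import torch
import torch.nn.functional as F

def structure_loss(pred, mask):
    """BCE + IoU loss, the kind of pixel-wise supervision commonly used for DIS-style maps."""
    bce = F.binary_cross_entropy_with_logits(pred, mask)
    pred = torch.sigmoid(pred)
    inter = (pred * mask).sum(dim=(2, 3))
    union = (pred + mask).sum(dim=(2, 3)) - inter
    iou = 1.0 - (inter + 1.0) / (union + 1.0)
    return bce + iou.mean()

def combined_loss(final_pred, loc_preds, glb_pred, map_preds, mask,
                  w_loc=0.3, w_glb=0.3, w_map=0.3):
    """Total loss = final prediction loss + weighted intermediate-supervision terms."""
    loss_final = structure_loss(final_pred, mask)
    loss_loc = sum(structure_loss(p, mask) for p in loc_preds)   # local-branch side outputs
    loss_glb = structure_loss(glb_pred, mask)                    # global-branch side output
    loss_map = sum(structure_loss(p, mask) for p in map_preds)   # token/map side outputs
    total = loss_final + w_loc * loss_loc + w_glb * loss_glb + w_map * loss_map
    # Returning the components separately makes it easy to log each one individually.
    return total, {"final_loss": loss_final, "Loss_loc": loss_loc,
                   "Loss_glb": loss_glb, "Loss_map": loss_map}
```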
We are observing that the `final_loss`, which is the most important one, is decreasing nicely as expected (see the first attached loss curve). However, `Loss_loc`, `Loss_glb`, and `Loss_map` are not converging during training (see the remaining curves). We are interested in any insights into why this could happen and how we could improve the situation. Furthermore, did you keep track of these losses during pretraining of the original MVANet, and did you observe convergence in that case?
For context, we used the `train.py` script provided in the repo without modifying any settings other than the dataset.