Layne-Huang / PMDM


Training loss does not decrease #14

Closed Xinheng-He closed 2 months ago

Xinheng-He commented 2 months ago

Hi there, I'm currently reproducing the training code with the cross-docked dataset, but the training loss plateaus at ~1200 after 50 epochs. I dug into the code and found that in ./models/epsnet/MDM_pocket_coor_shared.py, the call F.mse_loss(pos_eq_global + pos_eq_local, target_pos_global + target_pos_local, reduction='none') computes the loss between the output of self.net and the noise-added input, but IMO it should be between the output of self.net and the original (clean) input. Any comments from the developers?
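For context on why the choice of regression target matters: in a standard DDPM-style diffusion loss, the network is trained against either the injected noise (ε-parameterisation) or the clean sample (x0-parameterisation), never against the noised sample it receives as input. Below is a minimal NumPy sketch of that distinction; all names (`x0`, `eps`, `alpha_bar`, `pred_eps`) are hypothetical illustrations, not the PMDM code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: x0 is a clean coordinate set, eps is Gaussian noise,
# alpha_bar stands in for the cumulative noise schedule at timestep t.
x0 = rng.normal(size=(8, 3))      # clean atom positions
eps = rng.normal(size=(8, 3))     # injected noise
alpha_bar = 0.7                   # \bar{alpha}_t for some t

# Forward diffusion: the noised sample the network actually sees as input.
xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# Epsilon-parameterisation: the regression target is the injected noise eps
# (an x0-parameterised model would instead regress onto x0). Regressing onto
# xt itself would reward the network for learning the identity map.
pred_eps = eps + 0.01 * rng.normal(size=eps.shape)  # stand-in for net(xt, t)
loss = np.mean((pred_eps - eps) ** 2)
print(loss)
```

A loss stuck at a large constant is consistent with the target being mismatched this way, since the network can then reduce the loss only up to the variance of the noise it is asked to reproduce.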