Hi there, I'm reproducing the training code with the CrossDocked dataset, but the training loss plateaus at ~1200 after 50 epochs.
I dug into the code and found that in ./models/epsnet/MDM_pocket_coor_shared.py, the call F.mse_loss(pos_eq_global + pos_eq_local, target_pos_global + target_pos_local, reduction='none') computes the loss between the output of self.net and the noise-perturbed input, but in my opinion it should be computed between the output of self.net and the original (clean) input.
Any comments from the developers?
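To illustrate the concern, here is a minimal sketch of the two candidate loss targets using dummy coordinates. All names (pos_clean, pos_noised, net_output, sigma) are hypothetical stand-ins, not the repository's actual variables; the point is only that a perfect denoiser can drive the loss to zero against the clean target, but never against the noised target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the quantities in the training step:
# clean atom coordinates, Gaussian noise, and the noised coordinates fed to the net.
pos_clean = rng.normal(size=(8, 3))      # original input positions
noise = rng.normal(size=(8, 3))          # noise added in the forward diffusion step
sigma = 0.5
pos_noised = pos_clean + sigma * noise   # what self.net actually receives

# Suppose the network reconstructs the clean positions perfectly.
net_output = pos_clean.copy()

def mse(a, b):
    return float(np.mean((a - b) ** 2))

loss_vs_noised = mse(net_output, pos_noised)  # target = noised input (current code)
loss_vs_clean = mse(net_output, pos_clean)    # target = original input (suggested)

# Against the noised target, the loss floors at sigma**2 times the noise power
# and cannot reach zero, which would explain a loss that plateaus during training.
print(loss_vs_clean)
print(loss_vs_noised)
```

If the current target really is the noised input, that irreducible floor would be consistent with the loss stalling at a large constant value.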