minkyujeon closed this issue 5 years ago
I agree, I've tried to use MarginRankingLoss but the discriminator is still too weak.
You're correct, I am not using hinge loss. That's simply because I never got around to fixing this. PyTorch doesn't provide a hinge loss function besides torch.nn.HingeEmbeddingLoss, which doesn't fit this use case, and I had some trouble applying the max function element-wise to tensors. To get the script running I simply removed the max. But this is not how the loss function is supposed to look!
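For what it's worth, the element-wise max against zero can be done with `F.relu` (or `torch.clamp(min=0)`), which sidesteps the trouble with `torch.max`. A minimal sketch of the hinge discriminator loss (my own illustration, not the repo's code; tensor shapes are assumed):

```python
import torch
import torch.nn.functional as F

def hinge_loss_d(d_real, d_fake):
    # Hinge loss for the discriminator:
    #   mean(max(0, 1 - D(real))) + mean(max(0, 1 + D(fake)))
    # F.relu performs the element-wise max(0, .) that torch.max
    # does not directly offer against a scalar.
    loss_real = F.relu(1.0 - d_real).mean()
    loss_fake = F.relu(1.0 + d_fake).mean()
    return loss_real + loss_fake

# Example scores from the discriminator:
d_real = torch.tensor([2.0, 0.5])
d_fake = torch.tensor([-2.0, 0.5])
print(hinge_loss_d(d_real, d_fake).item())  # -> 1.0
```

Note this loss is bounded below by 0, unlike the version with the max removed.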
I used hinge loss and verified whether the loss is appropriate for my task. If you don't mind, could you leave a comment on my task? I'm confused about whether to use the D loss value directly, and whether it can become negative or not.
Yeah it's fine. I saw your pull request and I will merge it.
In loss.py:

```python
class LossD(nn.Module):
    def __init__(self, device):
        super(LossD, self).__init__()
        self.dtype = torch.FloatTensor if device == 'cpu' else torch.cuda.FloatTensor
        self.zero = torch.zeros([1]).type(self.dtype)
```
-> In the paper, LossD is a hinge loss (L_DSC), but I don't think this implementation is a hinge loss.
Did I miss something?
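For comparison, a hinge version of LossD along the lines of L_DSC might look like the following. This is just a sketch: the forward signature, score shapes, and class name are my assumptions, not the repo's actual interface.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HingeLossD(nn.Module):
    """Sketch of a hinge discriminator loss (L_DSC in the paper):
    L = mean(max(0, 1 - D(real))) + mean(max(0, 1 + D(fake)))
    """
    def forward(self, scores_real, scores_fake):
        # F.relu implements the element-wise max(0, .) terms.
        return F.relu(1.0 - scores_real).mean() + F.relu(1.0 + scores_fake).mean()

loss_d = HingeLossD()
print(loss_d(torch.tensor([0.5]), torch.tensor([-0.5])).item())  # -> 1.0
```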