DSE-MSU / DeepRobust

A PyTorch adversarial library for attack and defense methods on images and graphs
MIT License

Loss function #35

Closed liushunyu closed 4 years ago

liushunyu commented 4 years ago

I noticed that you use nll_loss as the loss function in PGDtraining. Why don't you use cross_entropy?

In ResNet, you use cross_entropy to train the model.

liushunyu commented 4 years ago

Can the same effect be achieved with these two loss functions?

YaxinLi0-0 commented 4 years ago

You can refer to this link to see the difference between torch.nn.NLLLoss and torch.nn.CrossEntropyLoss: https://discuss.pytorch.org/t/difference-between-cross-entropy-loss-or-log-likelihood-loss/38816
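For reference, the relationship can be sketched directly: torch.nn.functional.cross_entropy applies log_softmax internally and then nll_loss, so the two are equivalent only when the model's output is already a log-probability. A minimal check (random logits, not DeepRobust code):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 10)          # raw scores, as a model without log_softmax produces
target = torch.tensor([1, 0, 3, 9])

# cross_entropy = log_softmax + nll_loss, applied internally
ce = F.cross_entropy(logits, target)

# nll_loss expects log-probabilities, so log_softmax must be applied first
nll = F.nll_loss(F.log_softmax(logits, dim=1), target)

print(torch.allclose(ce, nll))  # True: same computation
```

So nll_loss on raw logits is not the same quantity; it only matches cross_entropy when the model ends in log_softmax.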

YaxinLi0-0 commented 4 years ago

also: https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html

liushunyu commented 4 years ago

So shouldn't you use cross_entropy as the loss function in PGDtraining?

There is no log_softmax in your resnet, vgg, densenet, and so on.
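To illustrate the mismatch being reported: if a model returns raw logits (no log_softmax layer), feeding them to nll_loss silently computes the wrong value, while cross_entropy handles them correctly. A small sketch with stand-in random tensors (not the library's actual models):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(8, 10)            # raw logits, e.g. the output of a ResNet with no log_softmax
target = torch.randint(0, 10, (8,))

wrong = F.nll_loss(logits, target)     # runs without error, but inputs are not log-probabilities
right = F.cross_entropy(logits, target)  # correct for raw logits

# the two disagree, which is the bug this issue describes
print(wrong.item(), right.item())
```

Making all models return raw logits and training with CrossEntropyLoss, as proposed below, removes the inconsistency.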

YaxinLi0-0 commented 4 years ago

Yes, you are right :) There is a mismatch between the CNN model and the other models. I will use CrossEntropyLoss instead of nll_loss and update the CNN model to make them consistent. Thank you for reporting this issue.