Closed. liujiyuan13 closed this issue 5 years ago.
The last term in the loss function is commonly referred to as weight decay, and it is indeed handled elsewhere: its strength (lambda) is set through the parameter --weight_decay (and, for pretraining, --ae_weight_decay), which is passed to the optimizer.
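To make "passed to the optimizer" concrete: a minimal sketch, in plain Python rather than PyTorch, of how an optimizer that receives a weight_decay argument can apply the regularization itself, so the training loss never needs the (lambda/2) * ||W||^2 term explicitly. The ToySGD class below is an illustrative stand-in for torch.optim's behavior, not code from this repo.

```python
class ToySGD:
    """Toy scalar-parameter SGD that applies weight decay internally,
    mimicking how PyTorch optimizers treat the weight_decay argument."""

    def __init__(self, params, lr, weight_decay=0.0):
        self.params = params          # list of scalar weights
        self.lr = lr
        self.weight_decay = weight_decay

    def step(self, grads):
        # Weight decay adds lambda * w to each data-loss gradient before
        # the update, which is the gradient of (lambda/2) * w^2.
        for i, (w, g) in enumerate(zip(self.params, grads)):
            g = g + self.weight_decay * w
            self.params[i] = w - self.lr * g


# Example: one step with w = 1.0, data gradient 0.5, lambda = 0.01
opt = ToySGD([1.0], lr=0.1, weight_decay=0.01)
opt.step([0.5])
print(opt.params[0])  # 1.0 - 0.1 * (0.5 + 0.01 * 1.0) = 0.949
```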
Thanks very much for your reply! I've found the corresponding code for the weight regularization of the neural network.
As @rsaite pointed out, PyTorch makes it simple to add weight decay regularization on the network weights via the optimizer. Just to add the specific lines for reference:
Deep SVDD trainer https://github.com/lukasruff/Deep-SVDD-PyTorch/blob/5d7195dfff37efdaccd337257484d9d3db464730/src/optim/deepSVDD_trainer.py#L49
Autoencoder trainer for pretraining https://github.com/lukasruff/Deep-SVDD-PyTorch/blob/5d7195dfff37efdaccd337257484d9d3db464730/src/optim/ae_trainer.py#L30
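To see why handing weight decay to the optimizer is equivalent to writing the (lambda/2) * ||W||^2 term into the loss (for plain gradient descent), here is a minimal sketch with a toy scalar loss. The function names (data_grad, full_loss, numeric_grad) are illustrative, not from the repo: the numerical gradient of the loss-with-penalty matches the data gradient plus lambda * w, which is exactly what the optimizer's weight_decay adds.

```python
lam = 0.01  # weight-decay strength (the paper's lambda)

def data_loss(w):
    """Toy data loss L(w) = (w - 3)^2."""
    return (w - 3.0) ** 2

def data_grad(w):
    """Analytic gradient of the toy data loss: 2 * (w - 3)."""
    return 2.0 * (w - 3.0)

def full_loss(w):
    """Paper-style objective: data loss plus (lambda/2) * w^2."""
    return data_loss(w) + 0.5 * lam * w ** 2

def numeric_grad(f, w, eps=1e-6):
    """Central finite-difference gradient of f at w."""
    return (f(w + eps) - f(w - eps)) / (2.0 * eps)

# Optimizer-style gradient: data gradient plus lambda * w.
decayed = data_grad(2.0) + lam * 2.0

# Both routes give the same gradient (up to finite-difference error),
# so omitting the penalty from the loss and using weight_decay instead
# produces the same update.
print(numeric_grad(full_loss, 2.0), decayed)
```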
In the paper "Deep One-Class Classification", the One-Class Deep SVDD objective is

\min_W \; \frac{1}{n} \sum_{i=1}^{n} \| \phi(x_i; W) - c \|^2 + \frac{\lambda}{2} \sum_{\ell=1}^{L} \| W^\ell \|_F^2

However, in this code the loss is calculated as only

\frac{1}{n} \sum_{i=1}^{n} \| \phi(x_i; W) - c \|^2

They are different in these two places: the loss in this code lacks the regularization term on the weights of the neural network. Is it important, or is it considered in another place?