NOAA-PSL / model_error_correction_with_ai


use custom loss with respect to initial weights #18

Closed. frolovsa closed this issue 1 year ago.

frolovsa commented 1 year ago

Currently, the weight decay is implemented as follows:

loss = ||y - f(x)|| + lambda * ||w - 0||
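(For context, this ||w - 0|| term is typically what an optimizer's built-in `weight_decay` argument provides. A minimal sketch under that assumption; `model` and `lambda_reg` are placeholder names, not necessarily how this repo configures it:)

```python
import torch

# Assumed current setup: the L2-toward-zero penalty comes from the
# optimizer's weight_decay argument (names are illustrative).
model = torch.nn.Linear(10, 1)
lambda_reg = 1e-4
optimizer = torch.optim.Adam(model.parameters(), weight_decay=lambda_reg)
```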

For sequential training, we want to penalize the departure of the weights from their initial values w0, as follows:

loss = ||y - f(x)|| + lambda * ||w - w0||

This can be implemented in the training step by modifying the loss computation; see the sketch after the links below. Here are a handful of examples:

https://stackoverflow.com/questions/65998695/how-to-add-a-l1-or-l2-regularization-to-weights-in-pytorch

https://discuss.pytorch.org/t/how-to-implement-custom-regularization-losses-on-the-weights/2646/3

https://discuss.pytorch.org/t/l1-regularization-for-a-single-matrix/28088
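For concreteness, a minimal sketch of a training step with the proposed penalty toward w0. This is an illustration, not the project's actual code: `model`, `lambda_reg`, the loss function, and the data tensors are placeholders, the built-in `weight_decay` is set to zero so the two penalties aren't combined, and the squared L2 form is used to match standard weight decay (whether to square the norm is a detail to settle).

```python
import torch

model = torch.nn.Linear(10, 1)           # placeholder network
lambda_reg = 1e-4                         # regularization strength (illustrative)
optimizer = torch.optim.Adam(model.parameters(), weight_decay=0.0)
criterion = torch.nn.MSELoss()            # stand-in for the data-misfit term ||y - f(x)||

# Snapshot the initial weights w0 before sequential training starts.
w0 = {name: p.detach().clone() for name, p in model.named_parameters()}

def training_step(x, y):
    optimizer.zero_grad()
    pred = model(x)
    data_loss = criterion(pred, y)
    # Penalize departure from the initial weights: lambda * ||w - w0||^2
    reg_loss = sum(((p - w0[name]) ** 2).sum() for name, p in model.named_parameters())
    loss = data_loss + lambda_reg * reg_loss
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random data (illustrative only).
x = torch.randn(32, 10)
y = torch.randn(32, 1)
print(training_step(x, y))
```

Note that because w0 is detached, the extra term only adds a gradient of 2 * lambda * (w - w0) to each parameter, so it reduces to the usual weight decay when w0 = 0.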