mathLab / PINA

Physics-Informed Neural networks for Advanced modeling
https://mathlab.github.io/PINA/
MIT License

Adding Regularization to the PINN Loss function #102

Closed yorkiva closed 1 year ago

yorkiva commented 1 year ago

Is your feature request related to a problem? Please describe. The condition module is the only place where you can define your physics-informed loss functions. However, there seems to be no way of adding additional regularizers, such as an L1 or L2 regularizer on the model's weights. Such regularization can be very useful for mitigating overfitting, especially when training with noisy initial conditions.

Describe the solution you'd like It would be good to have an extension of the condition module (or something similar) to allow regularization.

Describe alternatives you've considered

Additional context https://github.com/openjournals/joss-reviews/issues/5352

dario-coscia commented 1 year ago

👋🏻 @yorkiva Thank you for your comment. The Condition class is used to define the condition to apply (e.g. a function) and where to apply it (e.g. a location). The L2 regularizer on the weights can be accomplished by setting the regularizer parameter in the PINN class.
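For illustration, in plain PyTorch terms an L2 weight regularizer amounts to adding a term λ·Σw² to the training loss. The sketch below uses a hypothetical toy model and a dummy residual loss, not PINA's actual Condition/PINN classes, just to show where the penalty enters:

```python
import torch

# Hypothetical toy network standing in for a PINN model (not PINA's API).
model = torch.nn.Linear(2, 1)

def l2_penalty(model, lam=1e-4):
    """L2 weight penalty: lam times the sum of squared parameters."""
    return lam * sum(p.pow(2).sum() for p in model.parameters())

# The physics-informed residual loss would normally come from the
# problem's conditions; a dummy scalar stands in for it here.
residual_loss = torch.tensor(0.1)
total_loss = residual_loss + l2_penalty(model)
total_loss.backward()  # gradients now include the regularization term
```

Note that when using plain SGD, the same effect can be obtained through the optimizer's `weight_decay` argument instead of an explicit loss term.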

Ultimately, I agree with you that a Loss class should be implemented to give the user more flexibility. We will soon release a beta version of the software where we plan to introduce these features, but for maintainability we cannot merge them into the current version.

dario-coscia commented 1 year ago

Hello @yorkiva :)

Just wanted to let you know that the beta version we will release soon adds the possibility of using a custom loss for the PINN (https://github.com/mathLab/PINA/pull/105), along with other very cool features such as gradient clipping, batch gradient accumulation, and more, since we will use the Lightning Trainer module in the backend to train the PINN.

Thank you for the very useful feedback 😄

danielskatz commented 1 year ago

Has this been resolved?

yorkiva commented 1 year ago

The regularizer in the PINN class only implements L2 regularization. It should also be extended to incorporate L1 regularization. In my experience comparing these regularizations, I have found that for some problems L1 regularization can be quite useful when the training data (i.e. boundary/initial conditions) is noisy. Since the authors mention that general regularization functionality will be added in the upcoming release, this issue can be closed for now.
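To make the L1/L2 distinction concrete, here is a minimal sketch in plain PyTorch (not PINA's API) of a penalty that can be switched between the two; the function name and signature are hypothetical:

```python
import torch

def regularization(params, kind="l2", lam=1e-4):
    """Return lam * ||w||_1 for kind='l1', or lam * ||w||_2^2 for kind='l2'."""
    if kind == "l1":
        return lam * sum(p.abs().sum() for p in params)
    if kind == "l2":
        return lam * sum(p.pow(2).sum() for p in params)
    raise ValueError(f"unknown regularization kind: {kind!r}")

# Example: for a single weight vector [3, -4] with lam = 1, the L1
# penalty is |3| + |-4| = 7 while the L2 penalty is 9 + 16 = 25.
w = [torch.tensor([3.0, -4.0])]
```

The L1 term tends to drive small weights exactly to zero, which is one reason it can help when fitting noisy boundary/initial-condition data.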