soanagno / rba-pinns

Implementation of fast PINN optimization with RBA weights
MIT License

Optimizer of the RBA is missing #2

Closed · afrah closed this issue 3 months ago

afrah commented 4 months ago

Thank you for sharing the code. The code does not include the optimizer of the RBA method mentioned in the paper; only a simple formula is used to adapt the weights of the PDE residuals:

```python
# normalize the residuals and accumulate them into the RBA weights with decay
r_norm = self.eta * torch.abs(self.r_pred) / torch.max(torch.abs(self.r_pred))
self.rsum = (self.rsum * self.gamma + r_norm).detach()
```
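Written out, the quoted update is an exponentially decaying accumulation of normalized residuals. Using $\lambda_i$ for the weight of collocation point $i$, $r_i$ for its PDE residual, $\gamma$ for the decay factor, and $\eta$ for the weight learning rate (notation assumed from the paper), it reads:

$$\lambda_i^{k+1} = \gamma \, \lambda_i^{k} + \eta \, \frac{|r_i|}{\max_j |r_j|}$$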

soanagno commented 3 months ago

I’m not entirely sure I understand your question. The paper doesn’t introduce a new optimizer but rather presents a weighting scheme for the collocation points based on the normalized residuals from previous iterations. For more details, please refer to Algorithm 1, which is implemented step-by-step within the PyTorch folder.
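As a rough illustration of the scheme this reply describes, below is a minimal, self-contained PyTorch sketch of a training step that uses such residual-based weights. It is not the repository's code: names like `pde_residual`, `rba_weights`, `eta`, and `gamma`, the toy ODE, and the hyperparameter values are assumptions made for this example; the authoritative version is Algorithm 1 in the paper and its step-by-step implementation in the PyTorch folder.

```python
import torch

# Toy setup (assumed): a small MLP and a 1D collocation grid.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
x = torch.linspace(0.0, 1.0, 100).reshape(-1, 1).requires_grad_(True)

eta, gamma = 0.01, 0.999                   # illustrative values for the RBA learning rate and decay
rba_weights = torch.zeros(x.shape[0], 1)   # one multiplier per collocation point
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x):
    """Residual of a toy ODE u'(x) - cos(x) = 0, just to have something differentiable."""
    u = net(x)
    u_x = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    return u_x - torch.cos(x)

for step in range(1000):
    optimizer.zero_grad()
    r = pde_residual(x)

    # RBA update, mirroring the quoted snippet: normalize |r| by its max,
    # then accumulate with exponential decay; detach so the weights are
    # treated as constants by autograd.
    r_norm = eta * torch.abs(r) / torch.max(torch.abs(r))
    rba_weights = (gamma * rba_weights + r_norm).detach()

    # Weighted PDE loss: points with persistently large residuals are emphasized.
    loss = torch.mean((rba_weights * r) ** 2)
    loss.backward()
    optimizer.step()
```

The `.detach()` mirrors the quoted snippet: the weights act as fixed per-point multipliers in the loss, so gradients flow only to the network parameters and any standard optimizer (Adam here) can be used unchanged.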