AJAXJR24 opened 11 months ago
While your question is about loss weights rather than the learning rate, dynamically updating them in TensorFlow is achievable. The key is to write custom callbacks that adjust the weights during training. This requires some understanding of TensorFlow's internals, in particular autograph: values traced into a graph as plain Python constants cannot be changed afterwards, so the weights have to live in `tf.Variable`s.
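A minimal sketch of such a callback, assuming the training loss reads its weights from `tf.Variable`s (the names `pde_weight`, `bc_weight`, and the rescaling schedule are illustrative, not DeepXDE or Keras API):

```python
import tensorflow as tf

# Illustrative weights: the training loss must read these Variables, e.g.
#   loss = pde_weight * loss_pde + bc_weight * loss_bc
# Plain Python floats get baked into the graph when the loss is traced by
# autograph/tf.function, so tf.Variables are needed for mid-training updates.
pde_weight = tf.Variable(1.0, trainable=False)
bc_weight = tf.Variable(1.0, trainable=False)

class LossWeightScheduler(tf.keras.callbacks.Callback):
    """Rescale the boundary-loss weight every `every` epochs."""

    def __init__(self, factor=1.05, every=100):
        super().__init__()
        self.factor = factor
        self.every = every

    def on_epoch_end(self, epoch, logs=None):
        if (epoch + 1) % self.every == 0:
            # assign() updates the Variable in place, so the next training
            # step sees the new weight without re-tracing the graph.
            bc_weight.assign(bc_weight * self.factor)
```

You would then pass it like any other Keras callback, e.g. `model.fit(..., callbacks=[LossWeightScheduler()])`.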
If diving into code complexity isn't your preference, consider alternative PINN libraries like sciann or modulus that offer built-in dynamic loss-weight functionality. Or you can refer to (this blog) to learn how to construct a PINN with adaptive loss weights from scratch.
No, this is not implemented in DeepXDE, and to be honest, LRA (learning rate annealing) isn't very effective on many problems. Anyway, it is implemented in NVIDIA Modulus. Personally, I would say just stick to constant coefficients and use deeper networks. Here is one of my papers where we solved PDEs with discontinuous solutions without any adaptive coefficients: paper
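For what it's worth, constant coefficients need no custom code at all: DeepXDE's `Model.compile` accepts a `loss_weights` argument. A minimal sketch, where `data`, `net`, and the weight values are placeholders:

```python
import deepxde as dde

# `data` and `net` are assumed to be an already-constructed dde.data.PDE
# problem and a dde.nn network; the weight values below are illustrative.
model = dde.Model(data, net)

# One constant weight per loss term, in the order the loss terms appear
# (e.g. PDE residual first, then boundary/initial conditions).
model.compile("adam", lr=1e-3, loss_weights=[1, 100])
model.train(iterations=20000)  # older DeepXDE versions use epochs= instead
```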
Dear @lululxvi, hi and thanks for your help. 1) How can I implement the underlying LRA algorithm in DeepXDE? 2) How do I define equation number 15?
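Regarding question 1, DeepXDE has no built-in hook for this, but the LRA update itself is a small computation. A hedged sketch of a single update step in raw TensorFlow, following the rule from Wang et al. (2021) as I understand it, i.e. lam_hat = max|grad L_pde| / mean|grad(lam * L_bc)| followed by a moving average; `net`, the two loss functions, and the value of `alpha` are assumptions, not DeepXDE API:

```python
import tensorflow as tf

lam = tf.Variable(1.0, trainable=False)  # adaptive weight for the BC loss
alpha = 0.1  # moving-average rate; a hyperparameter, value here is a guess

def lra_step(net, pde_loss_fn, bc_loss_fn):
    """One LRA update; assumes every network variable gets a gradient
    from both loss terms."""
    with tf.GradientTape(persistent=True) as tape:
        loss_pde = pde_loss_fn(net)
        loss_bc = lam * bc_loss_fn(net)
    g_pde = tape.gradient(loss_pde, net.trainable_variables)
    g_bc = tape.gradient(loss_bc, net.trainable_variables)
    del tape
    # Numerator: largest absolute PDE-gradient entry over all parameters.
    max_pde = tf.reduce_max([tf.reduce_max(tf.abs(g)) for g in g_pde])
    # Denominator: mean absolute entry of the (already lam-scaled) BC gradient.
    mean_bc = tf.reduce_mean(
        tf.concat([tf.reshape(tf.abs(g), [-1]) for g in g_bc], 0)
    )
    lam.assign((1.0 - alpha) * lam + alpha * max_pde / (mean_bc + 1e-8))
```

Wiring this into DeepXDE's training loop would still require a custom callback or a modified train step, which is the code complexity mentioned earlier in this thread.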