Closed athtareq closed 2 years ago
The loss weights are helpful in many cases. For example, NVIDIA Modulus uses a loss weight of 0.5 for the PDE loss for all problems.
> and as far as I can deduce it just multiplies the loss with the specified number?
That is true.
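For concreteness, here is a minimal sketch of what that multiplication amounts to, in plain NumPy rather than DeepXDE's internals; the term names and values are illustrative:

```python
import numpy as np

def weighted_loss(loss_terms, loss_weights):
    """Combine individual loss terms into one scalar objective.

    Each weight simply scales its corresponding term before summing;
    conceptually, that is all loss_weights does.
    """
    return float(np.dot(loss_weights, loss_terms))

# Illustrative values: a PDE residual loss and a boundary-condition loss.
pde_loss, bc_loss = 0.8, 0.02
total = weighted_loss([pde_loss, bc_loss], loss_weights=[0.5, 1.0])
print(total)  # 0.5 * 0.8 + 1.0 * 0.02 = 0.42
```

So a weight of 0.5 on the PDE term halves its contribution to the total loss, which shifts how strongly the optimizer pulls on that term relative to the others.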
@praksharma thanks for the reply! So when should I use it, for example?
Neural networks are black boxes, so nobody can give you a definitive answer. Most probably you should use adaptive weighting techniques for imbalanced loss terms.
@praksharma duly noted, thank you very much.
@praksharma Does DeepXDE have any examples implementing adaptive weights? Thanks!
I don't think so. I think adaptive weighting is not very useful. It tries to minimise the training loss, thus minimising the weight perturbation (dJ/dw) in each iteration/epoch. I have seen PINNs converge early with an adaptive loss while the actual solution wasn't obtained. This mostly happens in forward problems with discontinuous boundary conditions and in ill-posed inverse problems.
You may try some algorithms for adaptive loss weighting, such as learning rate annealing and the neural tangent kernel (NTK) PINN. You can implement them using the supported callbacks in DeepXDE. @lululxvi may give you better details.
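As a rough illustration of the learning-rate-annealing idea (Wang et al., 2021): the weight on a smaller-gradient term is pulled up toward the ratio of the PDE gradient's peak magnitude to that term's mean gradient magnitude, smoothed by a moving average. The sketch below uses made-up gradient vectors; a real implementation would pull these gradients from the network inside a DeepXDE callback.

```python
import numpy as np

def annealing_update(grad_pde, grad_bc, lam, alpha=0.1):
    """One learning-rate-annealing step on raw gradient vectors.

    lam_hat balances the BC gradient magnitude against the PDE one;
    lam is smoothed with a moving average so the weight changes gradually.
    """
    lam_hat = np.max(np.abs(grad_pde)) / np.mean(np.abs(grad_bc))
    return (1 - alpha) * lam + alpha * lam_hat

# Illustrative gradients: the PDE term dominates, so the BC weight grows.
g_pde = np.array([1.0, -2.0, 0.5])
g_bc = np.array([0.1, -0.05, 0.05])
lam = annealing_update(g_pde, g_bc, lam=1.0)
print(lam)  # grows from 1.0 toward lam_hat = 30
```

In DeepXDE this logic could live in a custom callback that recomputes the weights every few epochs, but the callback wiring itself is library-specific and omitted here.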
@praksharma I agree with your comments. That is why we haven't implemented it in DeepXDE yet.
Hello, I've been seeing loss_weights used in quite a lot of codes and examples, and as far as I can deduce it just multiplies each loss term by the specified number? I'm probably wrong, but can anyone give me further information? Thanks!