SciML / NeuralPDE.jl

Physics-Informed Neural Networks (PINN) Solvers of (Partial) Differential Equations for Scientific Machine Learning (SciML) accelerated simulation
https://docs.sciml.ai/NeuralPDE/stable/

Allow users to write specialized data-free loss functions and use NeuralPDE training strategies #703

Status: Open · opened by nicholaskl97 1 year ago

nicholaskl97 commented 1 year ago

NeuralPDE develops its loss function in two steps:

  1. Develop a data-free loss function $\ell(x, \theta)$ from each equation/boundary condition in the `PDESystem`. This is the role of NeuralPDE's parser.
  2. Develop a full loss function $L(\theta)$ from the data-free loss functions, the domains, and the training strategy. This involves building a training set from the strategy and domains, then merging it with the data-free loss functions (sketched below).
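
To make the split concrete, here is a minimal sketch in plain Julia. The names are illustrative, not NeuralPDE internals: step 1 yields a pointwise loss $\ell(x, \theta)$, and step 2 samples the domain according to a strategy (a simple grid here) and reduces to a scalar $L(\theta)$.

```julia
using Statistics: mean

# Step 1 (illustrative): wrap a pointwise residual r(x, θ) into a
# data-free loss ℓ(x, θ) = r(x, θ)².
datafree_loss(residual) = (x, θ) -> abs2(residual(x, θ))

# Step 2 (illustrative): a grid-style strategy samples the domain [lo, hi]
# and reduces the pointwise losses to a full loss L(θ).
function merge_with_grid(ℓ, lo, hi; dx = 0.1)
    xs = lo:dx:hi
    return θ -> mean(ℓ(x, θ) for x in xs)
end

# Usage: full_loss = merge_with_grid(datafree_loss(my_residual), 0.0, 1.0)
```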

In some cases, a user may want to use step 2 with their own data-free loss functions, specialized to their PDE. For example, as noted in #702, directional derivatives aren't currently optimized, so a user may wish to write their own specialized data-free loss function that performs this optimization, without rewriting the training strategies provided by NeuralPDE.
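
As a concrete instance of such a specialization (a sketch, assuming the data-free loss involves a directional derivative as in #702): the term $\vec{\nabla} V(\vec{x}) \cdot \vec{f}(\vec{x})$ can be computed with one forward-mode pass along $\vec{f}(\vec{x})$ instead of materializing the full gradient first.

```julia
using ForwardDiff

# By the chain rule, d/dt V(x + t*d, θ) at t = 0 equals ∇ₓV(x, θ) ⋅ d,
# so one scalar forward-mode derivative gives the directional derivative.
directional_derivative(V, x, θ, d) =
    ForwardDiff.derivative(t -> V(x .+ t .* d, θ), 0.0)

# A hand-written, specialized data-free loss built on it (illustrative):
specialized_loss(V, f) = (x, θ) -> abs2(directional_derivative(V, x, θ, f(x)))
```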

Adding this functionality could look like exporting `merge_strategy_with_loss_function`; however, that function relies on a `PINNRepresentation` built up in step 1, which might not be the most user-friendly option for someone doing step 1 themselves. Additionally, we'd want to make sure that function is well documented, and perhaps even add a demo of this functionality if we want people to be aware of it as an option.
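
Purely as a strawman, usage of such an export might look like the sketch below; `merge_datafree_loss` is an invented name, and the real `merge_strategy_with_loss_function` signature differs since it takes the `PINNRepresentation`.

```julia
# Hypothetical interface, not current NeuralPDE API: the user supplies
# hand-written data-free losses and reuses an existing training strategy
# and the PDESystem's domains.
ℓ(x, θ) = abs2(my_residual(x, θ))    # user-written, no parser involved
full_loss = merge_datafree_loss(     # invented name, for illustration only
    QuasiRandomTraining(1_000),      # an existing NeuralPDE strategy
    domains,                         # same domains as in the PDESystem
    [ℓ],                             # one data-free loss per equation/BC
)
```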

@xtalax

nicholaskl97 commented 4 months ago

@xtalax, I just came up against this again and am wondering if there are any plans (possibly with your parser rewrite) to add a public interface for `merge_strategy_with_loss_function` or something similar.

For more context:

I'm working on SciML/NeuralLyapunov.jl, and my PDE is always something like $\vec{\nabla} V(\vec{x}) \cdot \vec{f}(\vec{x}) < 0$, where $\vec{f}$ is user-defined and we're searching for $V$. My data-free loss is then always $\ell(\vec{x}, \theta) = \max \left( 0, \vec{\nabla} V_\theta(\vec{x}) \cdot \vec{f}(\vec{x}) \right)^2$, just with a different $\vec{f}$ each time.
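
In plain Julia, that data-free loss is just the following (a sketch, assuming a callable candidate $V(\vec{x}, \theta)$, e.g. a neural network, and user-supplied dynamics $\vec{f}$; not how NeuralLyapunov is actually wired up):

```julia
using ForwardDiff, LinearAlgebra

# ℓ(x, θ) = max(0, ∇V_θ(x) ⋅ f(x))²: penalize points where the candidate
# Lyapunov function fails to decrease along the dynamics f.
function lyapunov_datafree_loss(V, f, x, θ)
    ∇V = ForwardDiff.gradient(y -> V(y, θ), x)
    return max(0.0, dot(∇V, f(x)))^2
end
```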

In some functionality that I'm currently adding to NeuralLyapunov, I'm hoping to enforce that PDE only in the region $\{ \vec{x} : V(\vec{x}) \le \rho \}$. It would be nice to be able to use an `if ... else ...` statement to make the loss as above when $V(\vec{x}) \le \rho$ and $0$ when $V(\vec{x}) > \rho$, but the NeuralPDE parser doesn't accept `if ... else ...`.
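
As a pointwise loss in plain Julia, the restriction is a one-line branch on top of the sketch above (reusing the hypothetical `lyapunov_datafree_loss`); it's exactly this kind of branch that the symbolic parser rejects:

```julia
# Enforce the decrease condition only on {x : V_θ(x) ≤ ρ}. Base.ifelse
# evaluates both branches, which is harmless for a pointwise loss.
restricted_loss(V, f, ρ) =
    (x, θ) -> ifelse(V(x, θ) <= ρ, lyapunov_datafree_loss(V, f, x, θ), 0.0)
```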

I believe `IfElse.ifelse` provides a workaround, which I will likely use, but I haven't gotten it to work yet, I think because the parser applies the dot operator to it in a strange way. Even if I do get it working, it will be somewhat inconvenient, especially for users of my library who might want to define their own versions of the above conditions. For example, instead of $0$ in the "else" case, it would be reasonable to use another conditional there that depends on the sign of $\vec{\nabla} V_\theta(\vec{x}) \cdot \vec{f}(\vec{x})$, and currently anyone wanting to do that will also have to know about `IfElse.ifelse`.
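
For concreteness, the symbolic form I've been attempting looks roughly like the sketch below (a 1-D stand-in: `Dx(V(x))` takes the place of $\vec{\nabla} V_\theta \cdot \vec{f}$; this is the shape that hasn't made it through the parser yet):

```julia
using ModelingToolkit, IfElse

@parameters x
@variables V(..)
Dx = Differential(x)
ρ = 1.0

# IfElse.ifelse instead of `if ... else ...`, so the condition remains a
# traceable symbolic expression rather than a Julia control-flow branch.
# max(0, ⋅) encodes the inequality; the squaring happens when the data-free
# loss is built from the residual.
eq = IfElse.ifelse(V(x) <= ρ, max(0, Dx(V(x))), 0.0) ~ 0.0
```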

As I described in the first post of this issue, I would very much like to leverage the training strategies in NeuralPDE rather than re-implement them myself, but the parser is not just unnecessary for my application (since I know my PDE ahead of time) but actually unhelpful: the user-defined $\vec{f}$ has to be traceable by Symbolics, and I lose the freedom to optimize the data-free loss function for my specific PDE, such as with the directional derivative issue in #702.

I could also imagine someone wanting to use the parser and then apply their own custom training strategy, but I don't personally have a need for that yet.

ChrisRackauckas commented 4 months ago

I think that would be interesting to have.