SciML / NeuralPDE.jl

Physics-Informed Neural Network (PINN) solvers of (partial) differential equations for Scientific Machine Learning (SciML)-accelerated simulation
https://docs.sciml.ai/NeuralPDE/stable/

Why is the default derivative method a numerical derivative? #427

Closed mhmodzoka closed 2 years ago

mhmodzoka commented 2 years ago

https://github.com/SciML/NeuralPDE.jl/blob/517eb0e986160d07c0a9eeef1df5d979e5da081c/src/pinns_pde_solve.jl#L109

Would you please show examples where autodifferentiation is used here?

ChrisRackauckas commented 2 years ago

It autodiffs over that.

ChrisRackauckas commented 2 years ago

It's described here:

https://github.com/SciML/NeuralPDE.jl/issues/150#issuecomment-699569334

Basically, a mixture of numerical and autodiff is asymptotically the most efficient if you work it out (Griewank's book has a nice proof that double reverse mode is never a good idea, IIRC), so reverse mode over a numerical stencil is by far the most efficient approach. Now, it can hit some numerical issues if you're trying to converge to very high accuracy, but we have noticed that even with embedded reverse-over-forward (which is more optimal than the reverse-over-reverse-over-reverse kind of thing we see other packages do; again, just do the proof yourself or see Griewank) you don't reach much higher accuracy: PINNs seem to flatline at around 1e-3 or 1e-4 in any real case. So at that point, why not use a numerical derivative if a higher-order stencil is faster?
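For intuition on why the stencil error doesn't matter here, this is a small sketch (in Python for illustration; it is not NeuralPDE.jl code, and the function `u` stands in for a trained network): a second-order central difference has O(h^2) error and a fourth-order stencil has O(h^4) error, both far below the ~1e-3 to 1e-4 floor PINNs typically converge to.

```python
import math

def central_diff2(f, x, h=1e-4):
    # Second-order central difference: truncation error is O(h^2).
    return (f(x + h) - f(x - h)) / (2 * h)

def central_diff4(f, x, h=1e-2):
    # Fourth-order central difference: truncation error is O(h^4),
    # so a larger h still gives high accuracy (and fewer roundoff issues).
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

# Stand-in for a trained network: a smooth function with a known derivative.
u = math.sin
x = 0.7
exact = math.cos(x)

err2 = abs(central_diff2(u, x) - exact)
err4 = abs(central_diff4(u, x) - exact)
print(err2, err4)  # both are orders of magnitude below the PINN loss floor
```

Note that the stencil is a fixed linear combination of network evaluations, so reverse-mode AD of the training loss with respect to the parameters passes through it with essentially no extra cost, which is the "reverse mode over numerical" combination described above.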

I'll clarify this with examples in docs rather soon.