Closed: mhmodzoka closed this issue 2 years ago.
It autodiffs over that.
It's described here:
https://github.com/SciML/NeuralPDE.jl/issues/150#issuecomment-699569334
Basically, a mixture of numerical differencing and autodiff is asymptotically the most efficient if you work it out (Griewank's book has a nice proof that double reverse mode is never a good idea, IIRC), so reverse mode over numerical differencing is by far the most efficient approach. Now, it can hit some numerical issues if you're trying to converge to a very high accuracy. But we have noticed that even with embedded reverse-over-forward (which is more optimal than the reverse-over-reverse-over-reverse kind of thing we see other packages do; again, just do the proof yourself or see Griewank) you don't reach a much higher accuracy anyway: PINNs seem to flatline at around 1e-3 or 1e-4 in any real case. So at that point, why not use numerical differencing if it's faster at higher order?
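To make "reverse mode over numerical" concrete, here is a minimal, self-contained sketch (not NeuralPDE.jl's actual code, and in Python rather than Julia for brevity): the second spatial derivative `u_xx` is approximated by a central-difference stencil in `x`, while the gradient with respect to the trainable parameter `w` is obtained by reverse-mode autodiff propagated *through* that stencil. The toy "network" `u(x; w) = sin(w*x)`, the tiny micrograd-style `Var` class, and the collocation point are all hypothetical illustrations, not anything from the library.

```python
import math

class Var:
    """Minimal reverse-mode autodiff scalar (micrograd-style sketch)."""
    def __init__(self, value, parents=(), local_grads=()):
        self.value = value
        self.parents = parents          # nodes this one was computed from
        self.local_grads = local_grads  # d(self)/d(parent) for each parent
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, (self, other), (1.0, 1.0))

    def __sub__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value - other.value, (self, other), (1.0, -1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value, (self, other),
                   (other.value, self.value))

    def __truediv__(self, c):  # divide by a plain constant (enough here)
        return Var(self.value / c, (self,), (1.0 / c,))

    __radd__ = __add__
    __rmul__ = __mul__

    def backward(self):
        # Topologically order the graph, then accumulate adjoints in reverse.
        topo, seen = [], set()
        def build(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p in v.parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            for p, g in zip(v.parents, v.local_grads):
                p.grad += v.grad * g

def vsin(x):
    return Var(math.sin(x.value), (x,), (math.cos(x.value),))

# Hypothetical one-parameter "network": u(x; w) = sin(w * x)
def u(w, x):
    return vsin(w * x)

def fd_uxx(w, x, h=1e-3):
    """Central difference for u_xx: numerical in x, AD only over w."""
    return (u(w, x + h) - 2.0 * u(w, x) + u(w, x - h)) / (h * h)

# PINN-style squared residual of u_xx + u = 0 at one collocation point.
w = Var(1.3)
x = 0.7
res = fd_uxx(w, x) + u(w, x)
loss = res * res
loss.backward()
print(w.grad)  # d(loss)/dw, via reverse mode through the FD stencil
```

The point of the sketch is the shape of the computation: one forward pass builds the stencil out of three network evaluations, and a single reverse sweep yields the parameter gradient, instead of nesting reverse mode over reverse mode to get `u_xx` exactly.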
I'll clarify this with examples in the docs rather soon.
https://github.com/SciML/NeuralPDE.jl/blob/517eb0e986160d07c0a9eeef1df5d979e5da081c/src/pinns_pde_solve.jl#L109
Would you please show examples of where automatic differentiation is used here?