Open finmod opened 3 years ago
Indeed, we need to expand SciMLBenchmarks to showcase more methods. Physics-informed neural networks, though, won't be the one that looks good, haha.
Do we already have the Lorenz inverse example somewhere in the works?
Yes, we have that old one we worked on.
https://benchmarks.sciml.ai/html/ParameterEstimation/LorenzParameterEstimation.html
The "short" version matches the DeepXDE one:
https://github.com/lululxvi/deepxde/blob/v0.11.2/examples/Lorenz_inverse_Colab.ipynb
DeepXDE takes 362.35 s, while the global optimization takes 1.1 s and the local optimization 0.03 s. That's roughly a 10,000x overhead to do the same parameter estimation with physics-informed neural networks, but that's to be expected since the method is extremely computationally expensive (though very versatile).
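For context on why the classical route is so much faster, here is a minimal sketch of what a "local optimization" parameter estimation for the Lorenz inverse problem looks like. This is not the actual benchmark code (which is in Julia); it's an illustrative Python version using SciPy, with synthetic data generated from assumed true parameters (10, 28, 8/3): simulate the ODE, compare against data, and fit the parameters by least squares.

```python
# Illustrative sketch (not the benchmark code): classical local-optimization
# parameter estimation for the Lorenz system.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def lorenz(t, u, sigma, rho, beta):
    # Standard Lorenz right-hand side.
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Generate synthetic "measurement" data from assumed true parameters.
true_p = (10.0, 28.0, 8.0 / 3.0)
u0 = [1.0, 0.0, 0.0]
t_eval = np.linspace(0.0, 3.0, 100)
data = solve_ivp(lorenz, (0.0, 3.0), u0, args=true_p,
                 t_eval=t_eval, rtol=1e-8).y

def residuals(p):
    # Residual between the simulated trajectory at parameters p and the data.
    sol = solve_ivp(lorenz, (0.0, 3.0), u0, args=tuple(p),
                    t_eval=t_eval, rtol=1e-8)
    return (sol.y - data).ravel()

# Local least-squares fit from a nearby initial guess.
fit = least_squares(residuals, x0=[8.0, 25.0, 2.0])
print(fit.x)  # should recover approximately (10, 28, 8/3)
```

Each optimizer iteration is just a handful of cheap ODE solves, which is why the classical approach finishes in fractions of a second, whereas a PINN has to train a neural network to represent the solution.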
I'm waiting for a new devops person to fix up our RebuildAction so we can start auto-building the benchmarks again, at which point I want to start expanding them. I might see if I can propose a bounty with some of the SciML funds to get community help in expanding the benchmarks too, or see if it could be a master's thesis topic for someone.
DataDrivenDiffEq should demonstrate the SciML ecosystem tools and solutions on the same examples that are proposed in deepxde (https://github.com/lululxvi/deepxde/tree/master/examples) in Python.
deepxde has recently reciprocated with the Lotka-Volterra inverse example, and it would be a great way to showcase and benchmark the SciML tools (ModelingToolkit, DiffEqFlux, NeuralPDE, UDEs, AD, etc.) against the Python solutions for all the other examples. Do we already have the Lorenz inverse example somewhere in the works?