maziarraissi / PINNs

Physics Informed Deep Learning: Data-driven Solutions and Discovery of Nonlinear Partial Differential Equations
https://maziarraissi.github.io/PINNs
MIT License

Why does the network use zero iterations for noiseless data? #6

Open ehtisham409 opened 4 years ago

nish-ant commented 4 years ago

Can you point to the specific code that you are referring to?

ehtisham409 commented 4 years ago

https://github.com/maziarraissi/PINNs/blob/master/appendix/continuous_time_identification%20(Burgers)/Burgers.py

ehtisham409 commented 4 years ago

continuous_time_identification (Burgers)/Burgers.py

nish-ant commented 4 years ago

model.train(0)

For this specific example, when we have noiseless data, nIter = 0 means that no Adam optimization is performed and the model is trained using L-BFGS only (with the arguments passed under options in ScipyOptimizerInterface). You can test this by increasing the number of iterations (say nIter = 10000), but it will likely not improve the result significantly.

model.train(10000)

Because of the noise, the valley of the loss landscape is more difficult to find than with noiseless data. Therefore, for the noisy data, an Adam optimization precedes the L-BFGS method.

See https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough#create_an_optimizer for more detail on optimizers.
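For reference, here is a minimal sketch of the two-stage schedule described above. The function and argument names (`train_two_stage`, `train_op_adam`, `scipy_optimizer`) are illustrative, not taken verbatim from the linked script; it only assumes a TF1-style session, an Adam train op, and a `tf.contrib.opt.ScipyOptimizerInterface` wrapper built with the desired L-BFGS-B options.

```python
def train_two_stage(sess, train_op_adam, scipy_optimizer, feed_dict, nIter):
    """Sketch: optional Adam warm-up followed by L-BFGS (names are illustrative)."""
    # Stage 1: Adam warm-up. With nIter = 0 this loop never runs,
    # so the noiseless case is handed straight to L-BFGS.
    for _ in range(nIter):
        sess.run(train_op_adam, feed_dict)

    # Stage 2: L-BFGS-B via ScipyOptimizerInterface, controlled by the
    # options dict (maxiter, ftol, ...) given when the wrapper was built.
    scipy_optimizer.minimize(sess, feed_dict=feed_dict)
```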

ehtisham409 commented 4 years ago

For simple ODEs, can I treat the case as noiseless?

nish-ant commented 4 years ago

A good test would be to start with nIter = 0 and then increase it until you are satisfied with the result.
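A hypothetical way to run that test with the Burgers identification script: sweep over a few warm-up budgets and compare the relative L2 error. The `PhysicsInformedNN` class and the data arrays are assumed to come from the linked Burgers.py and are not defined here.

```python
import numpy as np

# Hypothetical sweep over the Adam warm-up budget; PhysicsInformedNN and the
# arrays (X_u_train, u_train, layers, lb, ub, X_star, u_star) are assumed to
# be prepared as in the linked Burgers.py script.
for nIter in [0, 1000, 10000]:
    model = PhysicsInformedNN(X_u_train, u_train, layers, lb, ub)
    model.train(nIter)
    u_pred, f_pred = model.predict(X_star)
    error_u = np.linalg.norm(u_star - u_pred, 2) / np.linalg.norm(u_star, 2)
    print('nIter = %d, relative L2 error in u: %.3e' % (nIter, error_u))
```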