maziarraissi / PINNs

Physics Informed Deep Learning: Data-driven Solutions and Discovery of Nonlinear Partial Differential Equations
https://maziarraissi.github.io/PINNs

tf.exp() for lambda_2 in KdV function #34

Open cruzchue opened 3 years ago

cruzchue commented 3 years ago

This question has been asked before but went unanswered. I am wondering about it myself, as I was reusing the KdV.py code with a different equation and could not recover the parameters.

The KdV.py example estimates two parameters: lambda_1 and lambda_2. Both are coefficients that go into the differential operator (F) as:

F = -lambda_1 U U_x - lambda_2 U_xxx

that is, lambda_1 multiplies the solution (U) times its first derivative (U_x), and lambda_2 multiplies the third derivative (U_xxx). However, within the functions net_U0 and net_U1, exp is applied to the coefficient lambda_2.

I mean, lambda_1 = self.lambda_1, but lambda_2 = tf.exp(self.lambda_2). Perhaps I am missing something, but should it not be lambda_2 = self.lambda_2 instead?
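For reference, the lines I am referring to look roughly like this (a paraphrase with initial values recalled from memory, not the exact KdV.py source):

```python
import tensorflow as tf

# Paraphrase of the parameterization in question -- not the exact
# KdV.py source; the initial values are from memory.
lambda_1_var = tf.Variable([0.0], dtype=tf.float32)   # trained directly
lambda_2_var = tf.Variable([-6.0], dtype=tf.float32)  # trained in log space

def differential_operator(U, U_x, U_xxx):
    lambda_1 = lambda_1_var
    lambda_2 = tf.exp(lambda_2_var)  # the tf.exp() being asked about
    return -lambda_1 * U * U_x - lambda_2 * U_xxx
```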

andrewforde1 commented 2 years ago

Hi Eduardo,

This ensures that lambda_2 is always a positive value, as an exponential term is always positive. Sorry, I don't have a source, but I remember reading that this method is statistically better than other methods such as squaring the term.

Hope this helps, Andrew
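A minimal sketch of the idea (hypothetical, not from the repository): squaring has zero gradient at raw = 0, where an optimizer can stall, whereas exp keeps a nonzero gradient everywhere and effectively learns the coefficient on a log scale.

```python
import tensorflow as tf

# Hypothetical comparison, not from KdV.py: two ways to keep a
# trainable coefficient positive.
raw = tf.Variable(0.0)

with tf.GradientTape(persistent=True) as tape:
    lam_exp = tf.exp(raw)     # > 0 everywhere
    lam_sq = tf.square(raw)   # >= 0, but not strictly positive

print(tape.gradient(lam_exp, raw).numpy())  # 1.0 -- gradient never vanishes
print(tape.gradient(lam_sq, raw).numpy())   # 0.0 -- optimizer can stall here
```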

Schrodinger-E commented 2 years ago

> Hi Eduardo,
>
> This ensures that lambda_2 is always a positive value, as an exponential term is always positive. Sorry, I don't have a source, but I remember reading that this method is statistically better than other methods such as squaring the term.
>
> Hope this helps, Andrew

But for the unknown coefficients of the governing equation, we cannot determine the sign a priori.

andrewforde1 commented 2 years ago

> > Hi Eduardo,
> >
> > This ensures that lambda_2 is always a positive value, as an exponential term is always positive. Sorry, I don't have a source, but I remember reading that this method is statistically better than other methods such as squaring the term.
> >
> > Hope this helps, Andrew
>
> But for the unknown coefficients of the governing equation, we cannot determine the sign a priori.

I'm not familiar with the KdV equation, but if you look at the paper "Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations", this might be due to the difference between the Burgers and KdV equations.

Burgers: u_t + lambda_1 * uu_x - lambda_2 * u_xx = 0

KdV: u_t + lambda_1 * uu_x + lambda_2 * u_xxx = 0

They both use the same form for the nonlinear operator: N[u] = lambda_1 * term1 - lambda_2 * term2. So keeping lambda_2 positive may be because the second term is added rather than subtracted. As I said, I don't know anything about these equations, so I could be completely wrong, but hopefully this helps.
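A quick sanity check (true coefficient values recalled from the paper, so treat them as assumptions): under the sign conventions written above, lambda_2 is positive in both equations, so the exp parameterization can represent either.

```python
import numpy as np

# True coefficients as reported in Part II of the paper (from memory):
# Burgers: lambda_2 = 0.01/pi; KdV: lambda_2 = 0.0025.
print(0.01 / np.pi)  # ~0.0031831 > 0 (Burgers viscosity)
print(0.0025)        # > 0 (KdV dispersion coefficient)
# Both are positive, so lambda_2 = exp(raw) can represent either one.
```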

Schrodinger-E commented 2 years ago

> > > Hi Eduardo,
> > >
> > > This ensures that lambda_2 is always a positive value, as an exponential term is always positive. Sorry, I don't have a source, but I remember reading that this method is statistically better than other methods such as squaring the term.
> > >
> > > Hope this helps, Andrew
> >
> > But for the unknown coefficients of the governing equation, we cannot determine the sign a priori.
>
> I'm not familiar with the KdV equation, but if you look at the paper "Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations", this might be due to the difference between the Burgers and KdV equations.
>
> Burgers: u_t + lambda_1 * uu_x - lambda_2 * u_xx = 0
>
> KdV: u_t + lambda_1 * uu_x + lambda_2 * u_xxx = 0
>
> They both use the same form for the nonlinear operator: N[u] = lambda_1 * term1 - lambda_2 * term2. So keeping lambda_2 positive may be because the second term is added rather than subtracted. As I said, I don't know anything about these equations, so I could be completely wrong, but hopefully this helps.

After testing, I think the root cause is that exp(-6.0) = 0.0025. For the Burgers equation, lambda_1 = 1.0 and lambda_2 = exp(-5.75) = 0.00318; for the KdV equation, lambda_1 = 1.0 and lambda_2 = exp(-6.0) = 0.0025. Since self.lambda_2 is initialized at -6.0, the exp parameterization starts the coefficient essentially at its true value, and good initial values help training.
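The arithmetic behind this:

```python
import numpy as np

# Verifying the initialization argument: with the raw variable
# initialized at -6.0, exp(-6.0) is already close to the true
# coefficients of both equations.
print(np.exp(-6.0))          # ~0.00248 -- essentially the true KdV lambda_2
print(np.exp(-5.75))         # ~0.00318 -- essentially 0.01/pi for Burgers
print(np.log(0.01 / np.pi))  # ~-5.75 -- the raw value the optimizer must reach
```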

cruzchue commented 2 years ago

Thanks for your answers. The code has worked well for us for cases of the form (lambda_1 * term1) + (lambda_2 * term2), which covers a lot of PDEs.
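For anyone adapting the code to a PDE where the sign of lambda_2 is not known a priori (Schrodinger-E's point above), one simple adaptation is to train the coefficient directly rather than in log space. A minimal, hypothetical sketch, not from the repository:

```python
import tensorflow as tf

# Unconstrained variant: let the optimizer find the sign of lambda_2,
# at the cost of the positivity guarantee and the log-scale
# parameterization that suits very small coefficients.
lambda_1_var = tf.Variable([0.0], dtype=tf.float32)
lambda_2_var = tf.Variable([0.0], dtype=tf.float32)

def differential_operator(U, U_x, U_xxx):
    # No tf.exp here: lambda_2 may converge to either sign.
    return -lambda_1_var * U * U_x - lambda_2_var * U_xxx
```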