lululxvi / deepxde

A library for scientific machine learning and physics-informed learning
https://deepxde.readthedocs.io
GNU Lesser General Public License v2.1

1D Terzaghi equation with NBC and RBC #60

Closed: LanPeng-94 closed this issue 4 years ago

LanPeng-94 commented 4 years ago

Hi Lu, thank you very much for sharing this amazing tool. I am trying to use it to solve the Terzaghi consolidation equation: du/dt = Cv*(d2u/dx2). The two BCs are u(0,t) = 0 and u'(H,t) = 0, and the IC is u(x,0) = 1 at all points.
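For reference, separation of variables gives the standard series solution of this problem, u(x,t) = sum_n [4/((2n+1)*pi)] * sin((2n+1)*pi*x/(2*H)) * exp(-(2n+1)^2*pi^2*Cv*t/(4*H^2)); its first (slowest-decaying) term is a handy reference solution:

u(x,t) ≈ (4/pi) * sin(pi*x/(2*H)) * exp(-pi^2*Cv*t/(4*H^2))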

I want to ask you if my following code is written correctly?

import deepxde as dde
import numpy as np
import tensorflow as tf

Cv = 4.86e-02  # coefficient of consolidation
H = 10  # layer thickness

def pde(x, u):
    # x[:, 0:1] is the spatial coordinate, x[:, 1:2] is time
    du_x = tf.gradients(u, x)[0]
    du_x, du_t = du_x[:, 0:1], du_x[:, 1:2]
    du_xx = tf.gradients(du_x, x)[0][:, 0:1]
    # residual of du/dt = Cv * d2u/dx2
    return du_t - Cv * du_xx

geom = dde.geometry.Interval(0, H)
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

def boundary_l(x, on_boundary):
    return on_boundary and np.isclose(x[0], 0)
def boundary_r(x, on_boundary):
    return on_boundary and np.isclose(x[0], H)

bc1 = dde.DirichletBC(
    geomtime, lambda x: np.zeros((len(x), 1)), lambda _, on_boundary: boundary_l
)
bc2 = dde.NeumannBC(
    geomtime, lambda x: np.zeros((len(x), 1)), lambda _, on_boundary: boundary_r
)
ic = dde.IC(
    geomtime, lambda x: np.full_like(x[:, 0:1], 1), lambda _, on_initial: on_initial
)

data = dde.data.TimePDE(
    geomtime, pde, [bc1, bc2, ic], num_domain=2540, num_boundary=80, num_initial=160, num_test=10000
)

net = dde.maps.FNN([2] + [32] * 3 + [1], "tanh", "Glorot normal")
model = dde.Model(data, net)

model.compile("adam", lr=1e-3)
model.train(epochs=50000)
model.compile("L-BFGS-B")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave=True, isplot=True)

My final result is strange

lululxvi commented 4 years ago

It looks good. What are your loss and results?

smao-astro commented 4 years ago

@979736316 I am wondering why your IC and BC are not consistent at (0, 0); this could make the problem have no solution and be hard to train.

LanPeng-94 commented 4 years ago

@979736316 I am wondering why your IC and BC are not consistent at (0, 0); this could make the problem have no solution and be hard to train.

Oh, I was wrong: u(0,0) should be 0 and the other initial values are 1, but I don't know how to set that.

qizhang94 commented 4 years ago

@979736316 I am wondering why your IC and BC are not consistent at (0, 0); this could make the problem have no solution and be hard to train.

Oh, I was wrong: u(0,0) should be 0 and the other initial values are 1, but I don't know how to set that.

Hi, do you know why, in this case, the terminal output shows four values for the train loss and four values for the test loss, but only one of the test-loss values is zero? What do these values represent?

By the way, @lululxvi I still get very weird results, as @979736316 mentioned. Thanks for your help!

qizhang94 commented 4 years ago

@979736316 I am wondering why your IC and BC are not consistent at (0, 0); this could make the problem have no solution and be hard to train.

Oh, I was wrong: u(0,0) should be 0 and the other initial values are 1, but I don't know how to set that.

This is how I modified your initial condition; however, it doesn't improve the results:

def fun_init(x):
    return np.full_like(x[:, 0:1], 1) * (1 - np.isclose(x[:, 0:1], 0))

ic = dde.IC(geomtime, fun_init, lambda _, on_initial: on_initial)

LanPeng-94 commented 4 years ago

It looks good. What are your loss and results?

This is my final loss and results. [loss and result figures]

lululxvi commented 4 years ago

@979736316 Your training loss is large. Check the FAQ "Q: I failed to train the network, e.g., the training loss is large." at https://deepxde.readthedocs.io/en/latest/user/questions_answers.html

lululxvi commented 4 years ago

@qizhang94

What are your weird results?

qizhang94 commented 4 years ago
[figure: the simplified PDE]

Hi Lu, thanks for your reply. I have attached an image with a simpler version of the original PDE: since the solution of the original PDE is an infinite sum, here I just use the first term. It is similar to your 1D diffusion example; the only difference is that I have adopted a Neumann boundary condition. The code link is provided here (the format gets messy if I attach the code directly):

https://drive.google.com/file/d/1VnhX5raAZrN8TBvQWVPhhMQb3aDMhwQ7/view?usp=sharing

As a result, the metric error is on the order of 1, which is not acceptable, but I don't know what went wrong here. I am looking forward to your suggestions!

lululxvi commented 4 years ago

There is an error in your code about the BCs. It should be

bc1 = dde.DirichletBC(geomtime, lambda x: np.zeros((len(x), 1)), boundary_l)

Similarly, bc2.
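For reference, applying the same fix to bc2 from the snippet above (pass the boundary-selector function itself, not a lambda returning it):

bc2 = dde.NeumannBC(geomtime, lambda x: np.zeros((len(x), 1)), boundary_r)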

LanPeng-94 commented 4 years ago

@979736316 Your training loss is large. Check the FAQ "Q: I failed to train the network, e.g., the training loss is large." at https://deepxde.readthedocs.io/en/latest/user/questions_answers.html

Thank you very much, I have read the FAQ. Let's multiply the network output by 10:

net = dde.maps.FNN(...)
net.outputs_modify(lambda x, y: y * 10)

[result figure]

The final result has been improved, but it is still not perfect. Is there any way to improve it?

lululxvi commented 4 years ago

Can you try the following:

qizhang94 commented 4 years ago

There is an error in your code about the BCs. It should be

bc1 = dde.DirichletBC(geomtime, lambda x: np.zeros((len(x), 1)), boundary_l)

Similarly, bc2.

Hi, Lu, thanks a lot! Now the result is okay.

@979736316 , I have normalized the consolidation equation so the result is on the order of 1. Here is the link to my code, hope it can help you.

https://drive.google.com/file/d/1VnhX5raAZrN8TBvQWVPhhMQb3aDMhwQ7/view?usp=sharing

LanPeng-94 commented 4 years ago

There is an error in your code about the BCs. It should be

bc1 = dde.DirichletBC(geomtime, lambda x: np.zeros((len(x), 1)), boundary_l)

Similarly, bc2.

Hi, Lu, thanks a lot! Now the result is okay.

@979736316, I have normalized the consolidation equation so the result is on the order of 1. Here is the link to my code, hope it can help you.

https://drive.google.com/file/d/1VnhX5raAZrN8TBvQWVPhhMQb3aDMhwQ7/view?usp=sharing

Thank you very much for your help. I ran your code and got an error:

AttributeError: 'FNN' object has no attribute 'apply_output_transform'

So I changed that line to net.outputs_modify(lambda x, y: y * x[:, 0:1]).

Are they the same?

lululxvi commented 4 years ago

They are the same, but you need to install the new version of DeepXDE
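With a recent DeepXDE version, the renamed call (matching the AttributeError above) would be:

net.apply_output_transform(lambda x, y: y * x[:, 0:1])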

qizhang94 commented 4 years ago

Hi, Lu

Sorry to bother you again. For the network output transform: if we have two unknowns, both functions of time and position, and we want a different transform for each, do you think the syntax of the following code is correct?

def output_transform(x, y):
    # Transform network output
    y_transformed = y
    Rhat, that = x[:, 0:1], x[:, 1:2]
    uhat, phat = y[:, 0:1], y[:, 1:2]
    return tf.stack([uhat * that * Rhat, phat * that * (1 - Rhat)])

lululxvi commented 4 years ago

y_transformed seems unused. Is it tf.stack(..., axis=1)?
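As a side note, since each slice above is a column vector of shape (N, 1), one way to obtain an (N, 2) output is tf.concat along axis 1; a minimal sketch (tf.concat is an alternative, not necessarily the intended fix):

def output_transform(x, y):
    Rhat, that = x[:, 0:1], x[:, 1:2]
    uhat, phat = y[:, 0:1], y[:, 1:2]
    # concatenating (N, 1) columns along axis 1 yields shape (N, 2)
    return tf.concat([uhat * that * Rhat, phat * that * (1 - Rhat)], axis=1)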

qizhang94 commented 4 years ago

Thanks a lot! I will have a try.

qizhang94 commented 4 years ago

Hi, Lu

I may need your help again. Now we have two unknowns u and p, which are functions of position x and time t. However, the DeepXDE solution shows that both u and p never change with time. To emphasize that, we added non-zero time-dependent source terms to the PDEs; the description is given in the following figure. We also attach the figures generated by DeepXDE and a shareable link to the code: https://drive.google.com/file/d/1GtvbuA8L1LD2n0qJ0cCQ8tqaam-CJhon/view?usp=sharing

[figure: the coupled PDEs]

[Figures 1-3: DeepXDE results]

I just don't know whether I made a mistake in defining the coupled PDEs, so I would really appreciate any suggestions. (I haven't tried a network output transformation here because the result is so strange; it looks like the PINN is not trained at all. I also increased the weight for the PDE loss, but the results didn't change at all.)

lululxvi commented 4 years ago
qizhang94 commented 4 years ago

Hi, Lu

I intended to design such IC, BC, and non-zero source terms in order to help me debug. Unfortunately, I don't have a reference solution, but I think the solution should and must depend on t: if it did not, then for the second PDE we would only be left with the second term, and with the DeepXDE output (p is linear in x) the LHS and RHS would never equal each other.

The loss for each component is given below. It is also very strange that the loss in the terminal is on the order of 10^-4 or 10^-5, but in the figure it is only on the order of 10^-2; is that because of the loss_weights? In any case, the NN only learns the initial and boundary conditions, not the PDE.

[figure: loss history]
lululxvi commented 4 years ago
qizhang94 commented 4 years ago

Thanks a lot. I will try your suggestion by first assuming an analytical solution and then deriving the RHS source terms.
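A minimal sketch of this manufactured-solution idea on the simpler 1D diffusion equation from earlier in the thread (the assumed solution below is illustrative only):

import sympy as sp

x, t, Cv = sp.symbols("x t Cv")
u = sp.exp(-t) * sp.sin(sp.pi * x / 20)         # assumed manufactured solution
source = sp.diff(u, t) - Cv * sp.diff(u, x, 2)  # residual of du/dt = Cv*d2u/dx2
print(sp.simplify(source))                      # add this as the source term of the PDE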

I just want to make sure I understand correctly: if we don't pass loss_weights, then the weights are 1 by default? In addition, can you help me check that there is no syntax error in my previous code (link copied here): https://drive.google.com/file/d/1GtvbuA8L1LD2n0qJ0cCQ8tqaam-CJhon/view?usp=sharing

[figure: the PDEs]

Thanks a lot!

Qi Zhang

lululxvi commented 4 years ago

Yes, by default the loss weights are 1. The code looks good.
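For reference, a sketch of setting non-default weights at compile time; the list matches the order in which the loss terms are printed during training, and the values here are placeholders:

model.compile("adam", lr=1e-3, loss_weights=[10, 1, 1, 1])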

qizhang94 commented 4 years ago

Thanks a lot for your help!

LanPeng-94 commented 4 years ago

There is an error in your code about the BCs.

lululxvi commented 4 years ago

Looks good.

LanPeng-94 commented 4 years ago

Looks good.

Hi Lu, I would like to ask you a question. In the "diffusion_1d" example you gave, what does the "test metric" after training mean? Is it the L2 relative error? And if I want to know the "Mean residual" in this case, what code should I run?

lululxvi commented 4 years ago

Check the example of Burgers.py for Mean residual.
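A minimal sketch following the Burgers.py pattern (here sampling random points from the domain instead of the dataset grid): predict the PDE residual via the operator argument and report its mean absolute value.

X = geomtime.random_points(100000)
f = model.predict(X, operator=pde)
print("Mean residual:", np.mean(np.absolute(f)))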

LanPeng-94 commented 4 years ago

Check the example of Burgers.py for Mean residual.

Thank you very much. I would like to ask again whether the following function defined in Burgers.py loads a finite-difference numerical solution:

def gen_testdata():
    data = np.load("dataset/Burgers.npz")
    t, x, exact = data["t"], data["x"], data["usol"].T
    xx, tt = np.meshgrid(x, t)
    X = np.vstack((np.ravel(xx), np.ravel(tt))).T
    y = exact.flatten()[:, None]
    return X, y

If I have an analytical solution, how should I define X to compute the mean residual?

lululxvi commented 4 years ago

Yes, it is the reference solution from other numerical methods, e.g., finite difference.

X is the locations where you want to check the accuracy. See "Q: How can I use a trained model for new predictions?" in the FAQ.
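A minimal sketch of that workflow, assuming a hypothetical analytical solution u_exact(x, t) (user-supplied, not part of DeepXDE):

# build a grid of space-time points and compare prediction vs. analytical solution
x = np.linspace(0, H, 100)
t = np.linspace(0, 1, 100)
xx, tt = np.meshgrid(x, t)
X = np.vstack((np.ravel(xx), np.ravel(tt))).T
y_true = u_exact(np.ravel(xx), np.ravel(tt))[:, None]  # u_exact is assumed
y_pred = model.predict(X)
print("L2 relative error:", dde.metrics.l2_relative_error(y_true, y_pred))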