lululxvi / deepxde

A library for scientific machine learning and physics-informed learning
https://deepxde.readthedocs.io
GNU Lesser General Public License v2.1

How to implement custom boundary conditions, or even change the B.C.s midway through the process, in a 2D time-dependent problem? (e.g., in subsurface transport problems, how to translate a slug injection into the model?) #646

Open MiladPnh opened 2 years ago

MiladPnh commented 2 years ago

@lululxvi Dear Prof. Lu, I just wanted to introduce myself and ask how you would apply a step function as a boundary condition midway through the process. To make it a bit clearer: the problem is transport, the domain is a 2D column governed by diffusion only for now, and I was wondering how I could embed a slug injection of contaminant as an instantaneous boundary condition at 0 < T = 0.5 < 1 on X = 0. I would really appreciate any comments or ideas, since it seems the model may have some trouble differentiating through such a condition. I also tried, very naively, the following to see if it works; it obviously did not:

def step(x, on_boundary):
    return on_boundary and np.isclose(x[1], 0.5) and np.isclose(x[0], 0)

bc_l = dde.icbc.DirichletBC(geomtime, lambda x: 0, boundary_l)
bc_m = dde.icbc.DirichletBC(geomtime, func, step)
bc_r = dde.icbc.DirichletBC(geomtime, lambda x: 0, boundary_r)
ic = dde.icbc.IC(
    geomtime, lambda x: np.sin(n * np.pi * x[:, 0:1] / L), lambda _, on_initial: on_initial
)

Define the PDE problem and configurations of the network:

data = dde.data.TimePDE(
    geomtime, pde, [bc_l, bc_m, bc_r, ic],
    num_domain=50, num_boundary=50, num_initial=20, num_test=2540,
    train_distribution='LHS',
)

------------------------------ Output ------------------------------

Warning: 2540 points required, but 2604 points sampled.
Compiling model...
Building feed-forward neural network...
'build' took 0.030702 s

'compile' took 0.409772 s

Initializing variables...
Training model...

Step      Train loss                                      Test loss                                       Test metric
0         [5.82e-02, 3.49e-02, nan, 2.94e-02, 5.03e-01]   [4.69e-02, 3.49e-02, nan, 2.94e-02, 5.03e-01]   []

Best model at step 0:
  train loss: inf
  test loss: inf
  test metric:

'train' took 0.562988 s

Compiling model...
'compile' took 0.344479 s

Training model...

Step      Train loss                                      Test loss                                       Test metric
1         [4.89e-02, 2.51e-02, nan, 2.34e-03, 4.97e-01]   [5.69e-02, 2.51e-02, nan, 2.34e-03, 4.97e-01]   []
1000      [4.45e-03, 7.73e-03, nan, 4.95e-03, 4.30e-01]
2000      [5.51e-03, 1.84e-03, nan, 3.59e-03, 4.07e-01]
3000      [9.40e-03, 4.63e-05, nan, 2.03e-03, 3.60e-01]
4000      [8.30e-03, 3.32e-04, nan, 3.20e-03, 3.29e-01]
5000      [1.16e-02, 5.34e-04, nan, 3.54e-03, 3.10e-01]
6000      [9.66e-03, 1.44e-04, nan, 3.10e-03, 3.05e-01]
7000      [1.57e-02, 3.09e-04, nan, 4.07e-03, 2.89e-01]
8000      [1.28e-02, 2.60e-04, nan, 4.98e-03, 2.79e-01]
9000      [1.46e-02, 2.19e-04, nan, 5.36e-03, 2.54e-01]
10000     [1.66e-02, 4.70e-04, nan, 7.41e-03, 2.41e-01]
11000     [1.39e-02, 1.13e-04, nan, 5.26e-03, 2.37e-01]
12000     [1.20e-02, 5.41e-05, nan, 6.72e-03, 2.28e-01]
13000     [1.28e-02, 6.83e-06, nan, 5.36e-03, 2.23e-01]
14000     [1.26e-02, 2.99e-05, nan, 5.32e-03, 2.16e-01]
15000     [1.90e-02, 4.29e-04, nan, 6.22e-03, 1.98e-01]
16000     [1.87e-02, 2.20e-04, nan, 7.35e-03, 1.90e-01]
17000     [1.85e-02, 2.80e-04, nan, 8.47e-03, 1.81e-01]
18000     [2.04e-02, 3.79e-04, nan, 9.15e-03, 1.73e-01]
INFO:tensorflow:Optimization terminated with:
  Message: STOP: TOTAL NO. of f AND g EVALUATIONS EXCEEDS LIMIT
  Objective function value: nan
  Number of iterations: 3459
  Number of functions evaluations: 18756
18757     [2.28e-02, 8.78e-05, nan, 8.54e-03, 1.63e-01]   [2.40e+02, 8.78e-05, nan, 8.54e-03, 1.63e-01]   []

Best model at step 0:
  train loss: inf
  test loss: inf
  test metric:

'train' took 74.550819 s

File ~/opt/anaconda3/lib/python3.9/site-packages/deepxde/utils/external.py:344, in save_best_state(train_state, fname_train, fname_test)
    342 print("Saving test data to {} ...".format(fname_test))
    343 if y_test is None:
--> 344     test = np.hstack((train_state.X_test, best_y))
    345     if best_ystd is None:
    346         np.savetxt(fname_test, test, header="x, y_pred")

File <__array_function__ internals>:5, in hstack(*args, **kwargs)

File ~/opt/anaconda3/lib/python3.9/site-packages/numpy/core/shape_base.py:345, in hstack(tup)
    343     return _nx.concatenate(arrs, 0)
    344 else:
--> 345     return _nx.concatenate(arrs, 1)

File <__array_function__ internals>:5, in concatenate(*args, **kwargs)

ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 1 dimension(s)
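
To make the intended condition concrete, the pulse I have in mind would be something like the following sketch (only a sketch: C0, t_off, and the sigmoid steepness k are placeholder values, and geomtime is the GeometryXTime object used above; the sigmoid smooths the on/off step so the boundary target stays differentiable):

import numpy as np
import deepxde as dde

C0 = 1.0     # injected concentration during the pulse (placeholder)
t_off = 0.5  # injection switches off at t = 0.5
k = 50.0     # steepness of the smoothed step (placeholder)

def boundary_inlet(x, on_boundary):
    # Inlet at x = 0; the second input column is time, as in the code above.
    return on_boundary and np.isclose(x[0], 0)

def slug(x):
    # Smooth approximation of "C0 for t <= t_off, 0 afterwards".
    t = x[:, 1:2]
    return C0 / (1 + np.exp(k * (t - t_off)))

bc_slug = dde.icbc.DirichletBC(geomtime, slug, boundary_inlet)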

lululxvi commented 2 years ago

Can you try the latest version of DeepXDE?
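
For example, you can verify what is actually imported with

import deepxde as dde

print(dde.__version__)  # compare against the latest release on PyPI

and upgrade with pip install --upgrade deepxde if it is behind.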

MiladPnh commented 2 years ago

@lululxvi Dear Prof. Lu, many thanks for your time. Yes, I am using the latest version of DeepXDE: dde.__version__ is 1.1.4.

Here I want to explain the time-dependent 2D advection-diffusion problem I am trying to build in the DeepXDE framework. As I am fairly new to Python, I am not sure which part I am doing wrong, and I hope you can help me. I am starting the flow from time t = 1 to avoid inf values, and I am using Neumann boundary conditions to conserve total mass. I am not sure whether I am updating everything properly in order to switch from 1D to 2D.

Below you will find the code I am using and the error I get.

[screenshot of the problem formulation]

def IsoDiff_eq_exact_solution(x, y, t):
    """Returns the exact solution for a given x, y, and t.

    Parameters
    ----------
    x : np.ndarray
    y : np.ndarray
    t : np.ndarray
    """
    return (M0/((np.sqrt(2*np.pi*2*DL*t))*(2*np.pi*2*DT*t)))*np.exp(-(x-vx*t)**2/(4*DL*t) - y**2/(4*DT*t))

def sol(x):
    """Returns the exact solution at points x, where the columns of x are x, y, and t."""
    return (M0/((np.sqrt(2*np.pi*2*DL*x[:,2:3]))*(2*np.pi*2*DT*x[:,2:3])))*np.exp(-(x[:,0:1]-vx*x[:,2:3])**2/(4*DL*x[:,2:3]) - x[:,1:2]**2/(4*DT*x[:,2:3]))
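
In equation form, what these two functions compute is the following (a direct transcription of the code above, using $2\pi \cdot 2Dt = 4\pi Dt$; a transcription, not a verification of the formula itself):

$$
c(x, y, t) = \frac{M_0}{\sqrt{4\pi D_L t}\,\left(4\pi D_T t\right)}
\exp\!\left(-\frac{(x - v_x t)^2}{4 D_L t} - \frac{y^2}{4 D_T t}\right),
\qquad M_0 = \frac{M}{b\,\phi}.
$$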

def gen_exact_solution():
    """Generates the exact solution of the Diff equation for the given values of x, y, and t."""
    # Number of points in each dimension:
    x_dim, y_dim, t_dim = (128, 50, 101)

    # Bounds of 'x', 'y', and 't':
    x_min, y_min, t_min = (1, 1, 1)
    x_max, y_max, t_max = (L, B, 25.)

    # Create tensors:
    t = np.linspace(t_min, t_max, num=t_dim).reshape(t_dim, 1)
    x = np.linspace(x_min, x_max, num=x_dim).reshape(x_dim, 1)
    y = np.linspace(y_min, y_max, num=y_dim).reshape(y_dim, 1)
    usol = np.zeros((x_dim, y_dim, t_dim))

    # Obtain the value of the exact solution for each generated point:
    for i in range(x_dim):
        for k in range(y_dim):
            for j in range(t_dim):
                usol[i][k][j] = IsoDiff_eq_exact_solution(x[i], y[k], t[j])

    # Save the solution:
    np.savez('Diff_eq_data', x=x, y=y, t=t, usol=usol)
    data = np.load('Diff_eq_data.npz')

def gen_testdata():
    """Import and preprocess the dataset with the exact solution."""
    # Load the data:
    data = np.load('Diff_eq_data.npz')

    # Obtain the values for t, x, y, and the exact solution:
    t, x, y, exact = data["t"], data["x"], data["y"], data["usol"].T

    # Process the data and flatten it out (like labels and features):
    xx, yy, tt = np.meshgrid(x, y, t)
    X = np.vstack((np.ravel(xx), np.ravel(yy), np.ravel(tt))).T
    z = exact.flatten()[:, None]
    return X, z

def main():
    def pde(x, y):
        """Expresses the PDE residual of the advection-diffusion equation
        dy_t = DL*dy_xx + DT*dy_yy - vx*(dy_x + dy_y).
        """
        # Slice the full gradient once; reassigning dy_x before slicing
        # dy_y and dy_t would read from the wrong tensor.
        dy = tf.gradients(y, x)[0]
        dy_x = dy[:, 0:1]
        dy_y = dy[:, 1:2]
        dy_t = dy[:, 2:3]
        dy_xx = tf.gradients(dy_x, x)[0][:, 0:1]
        dy_yy = tf.gradients(dy_y, x)[0][:, 1:2]
        return dy_t - (DL * dy_xx + DT * dy_yy) + vx * (dy_x + dy_y)

    # Computational geometry:
    geom = dde.geometry.Rectangle([1, 1], [L, B])
    timedomain = dde.geometry.TimeDomain(1, 25)
    geomtime = dde.geometry.GeometryXTime(geom, timedomain)

    def boundary_l(x, on_boundary):
        return on_boundary and np.isclose(x[0], 0)

    def boundary_r(x, on_boundary):
        return on_boundary and np.isclose(x[0], L)

    def boundary_d(x, on_boundary):
        return on_boundary and np.isclose(x[1], 0)

    def boundary_u(x, on_boundary):
        return on_boundary and np.isclose(x[1], B)

    # Initial and boundary conditions:
    bc = dde.icbc.NeumannBC(geomtime, lambda x: 0, lambda _, on_boundary: on_boundary)
    ic = dde.icbc.IC(geomtime, lambda x: sol, lambda _, on_initial: on_initial)

    # Define the PDE problem and configurations of the network:
    data = dde.data.TimePDE(
        geomtime, pde, [bc, ic],
        num_domain=100, num_boundary=50, num_initial=50, num_test=5000,
        train_distribution='LHS', solution=sol,
    )

    nn = 10
    activation = f"LAAF-{nn} relu"
    net = dde.nn.FNN([3] + [30] * 3 + [1], 'tanh', "Glorot normal")
    model = dde.Model(data, net)

    # Build and train the model:
    model.compile("adam", lr=1e-3)
    model.train(epochs=30000)
    model.compile("L-BFGS")
    losshistory, train_state = model.train()

    # Plot/print the results:
    dde.saveplot(losshistory, train_state, issave=True, isplot=True)
    X, y_true = gen_testdata()
    y_pred = model.predict(X)
    f = model.predict(X, operator=pde)
    print("Mean residual:", np.mean(np.absolute(f)))
    print("L2 relative error:", dde.metrics.l2_relative_error(y_true, y_pred))
    np.savetxt("test.dat", np.hstack((X, y_true, y_pred)))
    return np.hstack((X, y_true, y_pred)), f, losshistory, train_state

%matplotlib notebook

if __name__ == "__main__":
    # Constants:
    M = 1        # Mass of solute M injected along the thickness b of an aquifer (line source)
    b = 1
    phi = .3
    M0 = M / (b * phi)
    DL = 1
    DT = .2
    vx = 1
    # Problem parameters:
    L = 20       # Length of the bar
    B = 2
    n = 1        # Frequency of the sinusoidal initial conditions
    D = .2       # Diffusion coefficient
    # M = 1      # mass M of contaminant introduced suddenly at t = 0 at the origin (x = 0)
                 # in an infinite (one-dimensional) medium with zero contaminant concentration

    # Generate a dataset with the exact solution (if you don't have one):
    gen_exact_solution()

    # Solve the equation:
    data, f, lh, ts = main()

Error:

File ~/opt/anaconda3/lib/python3.9/site-packages/deepxde/utils/internal.py:86, in return_tensor.<locals>.wrapper(*args, **kwargs)
     84 @wraps(func)
     85 def wrapper(*args, **kwargs):
---> 86     return bkd.as_tensor(func(*args, **kwargs), dtype=config.real(bkd.lib))

File ~/opt/anaconda3/lib/python3.9/site-packages/deepxde/backend/tensorflow_compat_v1/tensor.py:76, in as_tensor(data, dtype)
     74         return data
     75     return tf.cast(data, dtype)
---> 76 return tf.convert_to_tensor(data, dtype=dtype)

File ~/opt/anaconda3/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
    151 except Exception as e:
    152     filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153     raise e.with_traceback(filtered_tb) from None
    154 finally:
    155     del filtered_tb

File ~/opt/anaconda3/lib/python3.9/site-packages/tensorflow/python/framework/tensor_util.py:332, in _AssertCompatible(values, dtype)
    330     raise TypeError("Expected any non-tensor type, but got a tensor instead.")
    331 else:
--> 332     raise TypeError(f"Expected {dtype.name}, but got {mismatch} of type "
    333                     f"'{type(mismatch).__name__}'.")

TypeError: Expected float32, but got <function sol at 0x7ff2c03b88b0> of type 'function'.

lululxvi commented 2 years ago

Could you first figure out which line(s) makes the error?
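
The last line of the traceback is the clue: "Expected float32, but got <function sol at ...> of type 'function'". In the code above, the IC is defined as lambda x: sol, which returns the function object sol itself rather than its values. A minimal sketch of the likely fix:

# Pass the function itself (DeepXDE calls it on the initial points),
# or a lambda that actually calls it.
ic = dde.icbc.IC(geomtime, sol, lambda _, on_initial: on_initial)
# equivalently: dde.icbc.IC(geomtime, lambda x: sol(x), lambda _, on_initial: on_initial)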

123new-net commented 2 years ago

@lululxvi Dear Prof. Lu, now I have a similar problem. I want to extend the calculation to two dimensions on a square, but my boundary conditions only need to be set at the lower-left and upper-right corners, i.e. the points x = y = 0 and x = y = 1. Below are the code I wrote and the errors I encountered. The main problem is that when I set the boundary conditions, it is hard to constrain x and y to be equal at the same point.

CODE

"""Backend supported: tensorflow.compat.v1, tensorflow, pytorch""" import deepxde as dde import numpy as np import matplotlib.pyplot as plt

# Build the PINN prediction model

miu = 1
Ct = 1

geom = dde.geometry.Rectangle(xmin=[0, 0], xmax=[1, 1])
timedomain = dde.geometry.TimeDomain(0, 1)               # time from 0 to 1
geomtime = dde.geometry.GeometryXTime(geom, timedomain)  # combined space-time domain

def pde(x, u):
    p, k = u[:, 0:1], u[:, 1:2]

    dp_x = dde.grad.jacobian(u, x, i=0, j=0)
    dp_y = dde.grad.jacobian(u, x, i=0, j=1)
    dp_t = dde.grad.jacobian(u, x, i=0, j=2)
    dp_xx = dde.grad.hessian(u, x, component=0, i=0, j=0)
    dp_yy = dde.grad.hessian(u, x, component=0, i=1, j=1)
    dp_xy = dde.grad.hessian(u, x, component=0, i=0, j=1)
    dp_yx = dde.grad.hessian(u, x, component=0, i=1, j=0)

    dk_x = dde.grad.jacobian(u, x, i=1, j=0)
    dk_y = dde.grad.jacobian(u, x, i=1, j=1)

    return miu * Ct * dp_t - (dk_x + dk_y) * (dp_x + dp_y) - k * (dp_xx + dp_yy + dp_xy + dp_yx)

def boundary_r(x, on_boundary):
    # x[0] is x, x[1] is y
    return on_boundary and (np.isclose(x[0], 1) and np.isclose(x[1], 1))

def boundary_l(x, on_boundary):
    return on_boundary and (np.isclose(x[0], 0) and np.isclose(x[1], 0))

def func(x):
    return 1

bc_1 = dde.icbc.DirichletBC(geomtime, lambda x: 1, boundary_r, component=0)
bc_2 = dde.icbc.DirichletBC(geomtime, lambda x: 0, boundary_l, component=0)
ic_1 = dde.icbc.IC(geomtime, func, lambda _, on_initial: on_initial, component=0)
ic_2 = dde.icbc.IC(geomtime, lambda x: 1, lambda _, on_initial: on_initial, component=1)

data = dde.data.TimePDE(
    geomtime, pde, [bc_1, bc_2, ic_1, ic_2],
    num_domain=1000, num_boundary=80, num_initial=80, num_test=1000,
)

net = dde.nn.FNN([3] + 3 * [50] + [2], "tanh", "Glorot normal")

model = dde.Model(data, net)

model.compile("adam",lr=0.001) model.train(epochs=10000) model.compile("L-BFGS") #两种优化方法结合 losshistory, train_state = model.train() dde.saveplot(losshistory, train_state, issave=True, isplot=True)

ERROR

Initializing variables...
Training model...

0 [3.51e-02, nan, nan, 1.47e+00, 1.09e+00] [3.46e-02, nan, nan, 1.47e+00, 1.09e+00] []

Best model at step 0:
  train loss: inf
  test loss: inf
  test metric:

'train' took 0.568185 s

Compiling model...
'compile' took 0.407718 s

Training model...

Step      Train loss                               Test loss                                Test metric
1         [2.06e-02, nan, nan, 1.23e+00, 9.42e-01]   [1.99e-02, nan, nan, 1.23e+00, 9.42e-01]   []
1000 [2.15e-08, nan, nan, 1.93e-08, 2.16e-08]
2000 [7.51e-10, nan, nan, 2.48e-10, 1.16e-09]
3000 [4.86e-10, nan, nan, 8.35e-11, 6.95e-10]
4000 [4.20e-10, nan, nan, 4.57e-11, 3.82e-10]
5000 [2.97e-10, nan, nan, 5.32e-11, 2.24e-10]
6000 [2.59e-10, nan, nan, 6.68e-11, 7.79e-11]
7000 [1.93e-10, nan, nan, 2.29e-11, 3.79e-11]
8000 [1.22e-10, nan, nan, 9.21e-12, 5.97e-12]
INFO:tensorflow:Optimization terminated with:
  Message: CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
  Objective function value: nan
  Number of iterations: 1643
  Number of functions evaluations: 8930
8931 [9.97e-11, nan, nan, 9.48e-12, 8.99e-12] [6.72e-11, nan, nan, 9.48e-12, 8.99e-12] []

Best model at step 0:
  train loss: inf
  test loss: inf
  test metric:

'train' took 101.359099 s

Saving loss history to C:\Users\Administrator\loss.dat ...
Saving training data to C:\Users\Administrator\train.dat ...
Saving test data to C:\Users\Administrator\test.dat ...

ValueError                                Traceback (most recent call last)
C:\Users\Public\Documents\Wondershare\CreatorTemp/ipykernel_2848/792027461.py in <module>
     51 model.compile("L-BFGS")  # combine the two optimizers
     52 losshistory, train_state = model.train()
---> 53 dde.saveplot(losshistory, train_state, issave=True, isplot=True)
     54

D:\conda\lib\site-packages\deepxde\utils\external.py in saveplot(loss_history, train_state, issave, isplot, loss_fname, train_fname, test_fname, output_dir)
    173     test_fname = os.path.join(output_dir, test_fname)
    174     save_loss_history(loss_history, loss_fname)
--> 175     save_best_state(train_state, train_fname, test_fname)
    176
    177 if isplot:

D:\conda\lib\site-packages\deepxde\utils\external.py in save_best_state(train_state, fname_train, fname_test)
    342 print("Saving test data to {} ...".format(fname_test))
    343 if y_test is None:
--> 344     test = np.hstack((train_state.X_test, best_y))
    345     if best_ystd is None:
    346         np.savetxt(fname_test, test, header="x, y_pred")

<__array_function__ internals> in hstack(*args, **kwargs)

D:\conda\lib\site-packages\numpy\core\shape_base.py in hstack(tup)
    344     return _nx.concatenate(arrs, 0)
    345 else:
--> 346     return _nx.concatenate(arrs, 1)

<__array_function__ internals> in concatenate(*args, **kwargs)

ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 1 dimension(s)

I hope you can answer my questions at your leisure. Thank you very much!

lululxvi commented 2 years ago

It seems there is more than one issue in your code. As you can see,

0 [3.51e-02, nan, nan, 1.47e+00, 1.09e+00] [3.46e-02, nan, nan, 1.47e+00, 1.09e+00] []

there is a nan in the loss.

You'd better first locate precisely where the problem is, and then we can look at how to solve it.

forxltk commented 2 years ago

It seems that no sample points fall exactly on (0, 0) or (1, 1), so those BC losses are nan. You can sample those points manually as anchors. For example:

points1 = geomtime.random_points(500)
points2 = geomtime.random_points(500)
points1[:, 0] = np.zeros_like(points1[:, 0])
points1[:, 1] = np.zeros_like(points1[:, 0])
points2[:, 0] = np.ones_like(points1[:, 0])
points2[:, 1] = np.ones_like(points1[:, 0])
points = np.vstack((points1, points2))

data = dde.data.TimePDE(geomtime, pde, [bc_1, bc_2, ic_1, ic_2], num_domain=1000, num_boundary=80,
                        num_initial=80, num_test=1000, anchors=points)
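
An alternative sketch, under the same two corner conditions (the 50-point time grid here is an arbitrary choice), is to pin the corner values directly with dde.icbc.PointSetBC instead of relying on boundary sampling:

# Enforce p = 1 at (1, 1, t) and p = 0 at (0, 0, t) on a grid of times.
t = np.linspace(0, 1, 50)[:, None]
corner0 = np.hstack((np.zeros_like(t), np.zeros_like(t), t))  # points (0, 0, t)
corner1 = np.hstack((np.ones_like(t), np.ones_like(t), t))    # points (1, 1, t)
bc_1 = dde.icbc.PointSetBC(corner1, np.ones_like(t), component=0)
bc_2 = dde.icbc.PointSetBC(corner0, np.zeros_like(t), component=0)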
123new-net commented 2 years ago

@lululxvi @forxltk Thank you very much for your advice. I updated my code based on your recommendation, and it worked. Now I have encountered a new problem, and I hope you can answer my doubts when you are free. My new model has two PDE equations, three boundary conditions, and two initial conditions; the training loss is shown below. The losses of the equations are particularly large, and the losses of the BCs and ICs also fluctuate around 1. I went through the FAQ about how to deal with large training losses. I tried studying a small spatio-temporal region first, increasing the number of points in the domain, and increasing the number of iterations. Unfortunately, I didn't get the results I wanted.

Step      Train loss    Test loss    Test metric
0     [9.99e+03, 9.99e+03, 3.37e+02, 6.05e-01, 6.12e+02, 4.05e+02, 8.85e-03] [9.99e+03, 9.99e+03, 3.37e+02, 6.05e-01, 6.12e+02, 4.05e+02, 8.85e-03] []
1000 [8.89e+03, 8.89e+03, 5.10e+01, 1.16e+02, 1.95e+02, 1.48e+02, 9.66e+01] [8.99e+03, 8.99e+03, 5.10e+01, 1.16e+02, 1.95e+02, 1.48e+02, 9.66e+01] []
2000 [8.03e+03, 8.03e+03, 3.71e+01, 1.07e+02, 3.57e+01, 9.96e+01, 1.40e+02] [8.81e+03, 8.81e+03, 3.71e+01, 1.07e+02, 3.57e+01, 9.96e+01, 1.40e+02] []
3000 [7.50e+03, 7.50e+03, 2.28e+01, 8.36e+01, 1.05e+01, 6.96e+01, 2.80e+02] [8.27e+03, 8.27e+03, 2.28e+01, 8.36e+01, 1.05e+01, 6.96e+01, 2.80e+02] []
4000 [7.07e+03, 7.07e+03, 2.47e+01, 8.05e+01, 6.16e+00, 3.34e+01, 4.26e+02] [7.81e+03, 7.82e+03, 2.47e+01, 8.05e+01, 6.16e+00, 3.34e+01, 4.26e+02] []
5000 [6.70e+03, 6.70e+03, 1.94e+01, 7.82e+01, 4.60e+00, 1.48e+01, 5.39e+02] [7.44e+03, 7.44e+03, 1.94e+01, 7.82e+01, 4.60e+00, 1.48e+01, 5.39e+02] []
6000 [6.32e+03, 6.32e+03, 1.51e+01, 8.14e+01, 3.52e+00, 8.09e+00, 6.86e+02] [7.08e+03, 7.08e+03, 1.51e+01, 8.14e+01, 3.52e+00, 8.09e+00, 6.86e+02] []
7000 [5.99e+03, 5.99e+03, 7.50e+00, 7.43e+01, 6.12e-01, 2.66e+00, 7.65e+02] [6.78e+03, 6.78e+03, 7.50e+00, 7.43e+01, 6.12e-01, 2.66e+00, 7.65e+02] []
8000 [5.67e+03, 5.67e+03, 3.37e+00, 7.49e+01, 3.34e-01, 1.02e+00, 8.12e+02] [6.53e+03, 6.53e+03, 3.37e+00, 7.49e+01, 3.34e-01, 1.02e+00, 8.12e+02] []
9000 [5.35e+03, 5.35e+03, 2.04e+00, 7.95e+01, 3.82e-01, 5.37e-01, 8.52e+02] [6.31e+03, 6.31e+03, 2.04e+00, 7.95e+01, 3.82e-01, 5.37e-01, 8.52e+02] []
10000 [5.07e+03, 5.07e+03, 1.41e+00, 8.53e+01, 6.35e-01, 4.33e-01, 8.15e+02] [6.15e+03, 6.15e+03, 1.41e+00, 8.53e+01, 6.35e-01, 4.33e-01, 8.15e+02] []

Do you have any ideas on how to fix the issue? Any help you can offer will be greatly appreciated.

forxltk commented 2 years ago

You should rescale the problem and set the loss weights. See FAQ.

123new-net commented 2 years ago

@forxltk Thank you for your timely reply. I tried setting the following loss weights in the model, and the results improved a lot.

model.compile("adam",lr=1e-4,loss_weights=[100,100,1,1,1,1,1])

For the loss weights, I have two questions:

forxltk commented 2 years ago
123new-net commented 2 years ago

Hello @lululxvi, in the process of modeling I found that the prediction results only satisfy the boundary conditions when the time is small, not when the time is long. So I applied hard boundary conditions to the model, and then ran into a problem. For the dependent variable p, I need to impose one IC and two BCs:

p(1, 1, t) = 1
p(0, 0, t) = 0
p(x, y, 0) = 1

This is my hard constraint, and it can only satisfy two of the three conditions, so the predictions only satisfy the IC and one BC:

def modify_output(X, u):
    x, y, t = X[:, 0:1], X[:, 1:2], X[:, 2:3]
    p, k = u[:, 0:1], u[:, 1:2]
    p_new = (x * y - 1) * p * t + 1
    k_new = t * k + 5
    return tf.concat((p_new, k_new), axis=1)

I couldn't come up with a hard constraint that satisfies all three. I want to know whether I can have two hard boundary conditions for the same dependent variable, and if not, how I can build a hard constraint for my problem.

lululxvi commented 2 years ago

Your BC and IC are not consistent. For example, what is P(0,0,0)?

123new-net commented 2 years ago

@lululxvi Thanks for your help! I'm having problems constructing complex geometric areas. I want to build a rectangular area with a circle subtracted from the inside, using the following code:

rectanglar = dde.geometry.Rectangle(xmin=[0, 0], xmax=[5, 5])
circle = dde.geometry.Disk([2.5, 4], 0.5)
geom = dde.geometry.CSGDifference(rectanglar, circle)
timedomain = dde.geometry.TimeDomain(0, 10)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

After the network is trained, I use the following code to look at the predictions. But why is the result still a complete rectangle?

x, y = np.meshgrid(np.linspace(0, 5, 100), np.linspace(0, 5, 100))
X = np.vstack((np.ravel(x), np.ravel(y))).T
t_0 = np.zeros(10000).reshape(10000, 1)
t_1 = np.ones(10000).reshape(10000, 1)
X_0 = np.hstack((X, t_0))
X_1 = np.hstack((X, t_1))
z = model.predict(X_1)
Z = z.reshape(100, 100)
plt.imshow(Z, extent=[0, 5, 0, 5], origin='lower', cmap='jet')

[image: predicted field filling the whole rectangle]

Thanks again!

123new-net commented 2 years ago

@lululxvi Also, I would like to ask why the target losses are larger after setting the loss weights. The loss before setting the weights was:

30000 [1.19e-03, 1.18e-03, 2.03e-01, 2.91e-02, 1.67e-03, 0.00e+00, 0.00e+00] [1.76e-04, 1.76e-04, 2.03e-01, 2.91e-02, 1.67e-03, 0.00e+00, 0.00e+00]

Accordingly, I set loss_weights=[1, 1, 100, 100, 1, 1, 1], and the loss of the network became:

30000 [1.36e-03, 1.34e-03, 1.73e+01, 1.93e+00, 1.26e-03, 0.00e+00, 0.00e+00] [4.32e-02, 4.31e-02, 1.73e+01, 1.93e+00, 1.26e-03, 0.00e+00, 0.00e+00]

The third and fourth losses are larger. Why is that?

lululxvi commented 2 years ago

But why is the result still a complete rectangle?

That is what you did via x, y = np.meshgrid(np.linspace(0, 5, 100), np.linspace(0, 5, 100)): you evaluated and plotted the network on the full rectangular grid, so the plot fills the whole rectangle.
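
If you want the removed disk to show up as blank, one sketch is to mask the grid points that fall outside the geometry (assuming geom, model, and the 100×100 grid X from your code, and taking the first output component p):

inside = geom.inside(X)            # boolean mask over the (x, y) grid points
Z = np.full(X.shape[0], np.nan)    # NaN cells render as blank in imshow
X_in = np.hstack((X[inside], np.ones((inside.sum(), 1))))  # append t = 1
Z[inside] = model.predict(X_in)[:, 0]
plt.imshow(Z.reshape(100, 100), extent=[0, 5, 0, 5], origin='lower', cmap='jet')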

The third and fourth loss more. Why is that?

Because you used a weight of 100: the displayed losses include the weights, so a raw third loss on the order of 2.03e-01 is reported roughly 100 times larger after weighting.

123new-net commented 2 years ago

@lululxvi Thanks for your help! I have realized my mistake. Please continue to help me solve the following problems.

Sorry to keep bothering you with childish questions. I really enjoy using DeepXDE. I hope you can give me some help, thank you very much!

lululxvi commented 2 years ago
123new-net commented 2 years ago

@lululxvi Thank you for your answer. Now I have two more questions.

Thanks again!

123new-net commented 2 years ago

[image attachment]

Sorry I forgot to upload the picture earlier.

lululxvi commented 2 years ago