lululxvi / deepxde

A library for scientific machine learning and physics-informed learning
https://deepxde.readthedocs.io
GNU Lesser General Public License v2.1

Questions about hard BC #589

Open q769855234 opened 2 years ago

q769855234 commented 2 years ago

I studied the hard BC in the example diffusion_1d_exactBC and your paper "Physics-Informed Neural Networks with Hard Constraints for Inverse Design", but I still cannot apply hard BCs to other problems. Is there a general formula to define hard BCs that is suitable for all problems?

lululxvi commented 2 years ago

Hard BCs are problem-dependent. You need to come up with the l(x) and g(x) for different problems. Also, see the FAQ "Q: By default, initial/boundary conditions are enforced in DeepXDE as soft constraints. How can I enforce them as hard constraints?"
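
In general, the recipe is to write the network output as u(x) = g(x) + l(x) * N(x), where g(x) satisfies the boundary/initial conditions exactly and l(x) vanishes wherever a condition is imposed, so the conditions hold for any network output N. A minimal sketch (the specific g and l below are illustrative, for hypothetical Dirichlet conditions u(0) = a and u(1) = b on [0, 1], not from any particular example):

import deepxde as dde

a, b = 0.0, 1.0  # illustrative boundary values

def output_transform(x, u):
    # g(x) = a + (b - a) * x matches both boundary values;
    # l(x) = x * (1 - x) vanishes at x = 0 and x = 1,
    # so g + l * u satisfies the BCs no matter what the network outputs.
    return a + (b - a) * x + x * (1 - x) * u

net = dde.maps.FNN([1] + [32] * 3 + [1], "tanh", "Glorot normal")
net.apply_output_transform(output_transform)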

engsbk commented 2 years ago

I tried to run this example of hard constraints:

################################################################
########################HARD IC & BC############################
################################################################
def output_transform(x, u):
    x_in = x[:, 0:1]
    t_in = x[:, 1:2]
    freq = 1

    # (1 - x) * x * t * u vanishes at x = 0, x = 1, and t = 0;
    # (1 - x) * sin(4*pi*f*t) supplies the left boundary value.
    return (1 - x_in) * x_in * t_in * u + (1 - x_in) * tf.sin(4 * np.pi * freq * t_in)

to apply:

u(0,t) = sin(4πft), u(1,t) = 0, u(x,0) = 0
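
(As a sanity check, the transform above does satisfy all three conditions for any network output: at x = 0 the first term vanishes and the second reduces to sin(4πft); at x = 1 both terms vanish; at t = 0 the first term vanishes and sin(0) = 0 kills the second. A quick numerical spot check, with an arbitrary constant standing in for the network output:)

import numpy as np

freq = 1

def transform(x, t, n):
    # Same algebra as output_transform; n stands in for the network output.
    return (1 - x) * x * t * n + (1 - x) * np.sin(4 * np.pi * freq * t)

t = np.linspace(0, 1, 5)
x = np.linspace(0, 1, 5)
n = 3.7  # arbitrary value; the constraints must hold regardless
print(transform(0.0, t, n) - np.sin(4 * np.pi * freq * t))  # ~0: u(0,t) = sin(4*pi*f*t)
print(transform(1.0, t, n))  # 0: u(1,t) = 0
print(transform(x, 0.0, n))  # 0: u(x,0) = 0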

In this domain:

geom = dde.geometry.Interval(0,1)
timedomain = dde.geometry.TimeDomain(0,1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

For this wave PDE:


def pde(x, u):
    c = 1
    #Using new DeepXDE Jacobians and Hessians
    du_xx = dde.grad.hessian(u, x, i=0, j=0)
    du_tt = dde.grad.hessian(u, x, i=1, j=1)

    return du_tt - c ** 2 * du_xx
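
For reference, this residual is the 1D wave equation with unit speed:

$$\frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2} = 0, \qquad c = 1, \quad x \in [0, 1],\ t \in [0, 1].$$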

My PINN model:

d = 2  # dimensions
level = 3  # dataset density level
nx_train = 10 * 2 ** (level - 1)
data = dde.data.TimePDE(
    geomtime, pde,
    [],
    num_domain=nx_train**d,
    num_boundary=2 * d * nx_train,
    num_initial=nx_train**d,
    train_distribution="Sobol",
)
n = 5
activation = f"LAAF-{n} sin"
net = dde.maps.FNN([2] + [75] * 7 + [1], activation, "Glorot normal")
net.apply_output_transform(output_transform)
the_model = dde.Model(data, net)
model = []  # keep trained models in a list; model_i indexes the current one
model_i = 0
model.append(the_model)
model[model_i].compile("adam", lr=4.9e-5)
model[model_i].train(epochs=10000)
dde.optimizers.set_LBFGS_options(ftol=1e-18, gtol=1e-18, maxiter=100000)
model[model_i].compile("L-BFGS-B")
losshistory, train_state = model[model_i].train()
dde.saveplot(losshistory, train_state, issave=True, isplot=True)

After running this example 10 times (to make sure the behavior was not just due to random initialization), I got these results: [attached plots output0–output5]

I'm not sure why the prediction looks a bit out of phase in the beginning. What would you recommend to fix this?

lululxvi commented 2 years ago

At t = 0.2, it seems that the PINN doesn't have the correct left BC, right? So the hard BC is not implemented correctly?

engsbk commented 2 years ago

How would you suggest implementing u(0,t) = sin(4πft), u(1,t) = 0, u(x,0) = 0 using hard constraints?

lululxvi commented 2 years ago

The hard BC code seems correct, but the figure does not look correct. I suspect the FDTD solution doesn't satisfy the BC at t = 0.2.
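
One quick way to test that (a sketch, assuming the dataset path and keys from the full script below, with exact indexed as (t, x)):

import numpy as np

data = np.load("dataset/Wave1D_0to1.npz")
t, exact = data["t_input"], data["u_output"]

freq = 1
bc_left = np.sin(4 * np.pi * freq * np.ravel(t))  # prescribed u(0, t)
fdtd_left = exact[:, 0]                           # FDTD values at x = 0
print("max |FDTD(0,t) - BC(t)| =", np.abs(fdtd_left - bc_left).max())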

engsbk commented 2 years ago

I don't think it is the FDTD solution, because when I changed the PINN configuration to be as follows:

    d = 2  # dimensions
    level = 3  # dataset level
    nx_train = 10 * 2 ** (level - 1)
    data = dde.data.TimePDE(
        geomtime, pde,
        [],
        num_domain=nx_train**d,
        num_boundary=2 * d * nx_train,
        num_initial=nx_train**d,
        train_distribution="uniform",
        anchors=observe_x,  # observe_x is defined as in the full script below
    )
    n = 5
    activation = "sin"  # f"LAAF-{n} sin"
    net = dde.maps.FNN([2] + [100] * 6 + [1], activation, "Glorot normal")
    net.apply_output_transform(output_transform)
    the_model = dde.Model(data, net)
    model = []
    model_i = 0
    model.append(the_model)
    model[model_i].compile("adam", lr=5e-4)  # , loss_weights=[10]
    model[model_i].train(epochs=50000)
    dde.optimizers.set_LBFGS_options(ftol=1e-18, gtol=1e-18, maxiter=100000)
    model[model_i].compile("L-BFGS-B")
    losshistory, train_state = model[model_i].train()  # loss_weights=[10]
    dde.saveplot(losshistory, train_state, issave=True, isplot=True)

These are the plot results: [attached plots u0–u5]

I think there might be a way to enhance the results, but I'm not sure how. What would you recommend that I try?

lululxvi commented 2 years ago

It is hard to tell just from the plots how to improve the results.

engsbk commented 2 years ago

Here is the code. Please have a look, and feel free to try it and let me know.

import deepxde as dde
from deepxde.backend import tf
import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
import matplotlib
print(tf.__version__)
dde.config.set_random_seed(12345)

def pde(x, u):
    c = 1
    du_xx = dde.grad.hessian(u, x, i=0, j=0)
    du_tt = dde.grad.hessian(u, x, i=1, j=1)
    return du_tt - c ** 2 * du_xx  # - source

geom = dde.geometry.Interval(0,1)
timedomain = dde.geometry.TimeDomain(0,1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

################################################################
########################HARD IC & BC############################
################################################################
def output_transform(x, u):
    x_in = x[:, 0:1]
    t_in = x[:, 1:2]
    freq = 1

    return (1 - x_in) * x_in * t_in * u + (1 - x_in) * tf.sin(4 * np.pi * freq * t_in)

point_location = 0
x_source = np.zeros(100) + point_location
t_source = np.linspace(0, 1, num=100)
observe_x = np.vstack((x_source, t_source)).T

d = 2 #dimensions
level = 4  #dataset level
nx_train = 10 * 2 ** (level - 1)
data = dde.data.TimePDE(
    geomtime, pde, 
    [], 
    num_domain = nx_train**d, 
    num_boundary = 2 * d * nx_train, 
    num_initial = nx_train**d,
    train_distribution="uniform",
    anchors = observe_x
) 
n = 3
activation = f"LAAF-{n} sin"
net = dde.maps.FNN([2] + [100] * 6 + [1], activation, "Glorot normal")
net.apply_output_transform(output_transform)
the_model = dde.Model(data, net)
resampler = dde.callbacks.PDEResidualResampler(period=10)
the_model.compile("adam", lr=5e-4, decay=("inverse time", 2000, 0.9))
the_model.train(epochs=50000, callbacks=[resampler])
dde.optimizers.set_LBFGS_options(ftol=1e-18, gtol=1e-18, maxiter=100000)
the_model.compile("L-BFGS-B")
losshistory, train_state = the_model.train(callbacks=[resampler])
dde.saveplot(losshistory, train_state, issave=True, isplot=True)

data_file = 'Wave1D_0to1'
######Verifying values and shapes########
data = np.load('dataset/'+data_file+'.npz')
t, x, exact = data["t_input"], data["x_input"], data["u_output"]
xx, tt = np.meshgrid(x,t)
X = np.vstack((np.ravel(xx), np.ravel(tt))).T
u = exact.flatten()[:, None]
print("number of time steps", len(t))
print("extracted x", X[:,0:1].shape)
print("extracted t", X[:,1:2].shape)
print("X", X.shape)
print("u", u.shape)

def gen_testdata():
    data = np.load('dataset/'+data_file+'.npz')
    t, x, exact = data["t_input"], data["x_input"], data["u_output"]
    xx, tt = np.meshgrid(x,t)

    X = np.vstack((np.ravel(xx), np.ravel(tt))).T
    u = exact.flatten()[:, None]
    return X, u

x1 = x
t1 = t
xx1, tt1 = np.meshgrid(x1, t1)
X1 = np.vstack((np.ravel(xx1), np.ravel(tt1))).T
X_star1 = np.hstack((xx1.flatten()[:,None], tt1.flatten()[:,None]))

u_pred = the_model.predict(X1)
U_pred = griddata(X_star1, u_pred.flatten(), (xx1, tt1), method="cubic")

snaps_num = 5      # without first and last
snaps = [0]        #first slice
for k in range(1,snaps_num):
    snaps += [int((k*(len(t1)-2)/snaps_num))]
snaps += [int(len(t1)-2)] #last slice
print("snapshots at:", snaps)

for snap in snaps:
    plt.plot(x1, exact[snap, :], "r--", label="FDTD")
    plt.plot(x1, U_pred[snap, :], label="PINN: Uniform")
    plt.title("$t = %1.3f s$" % (t[snap]), fontsize=15)
    plt.legend(prop={"size": 12})
    plt.show()

lululxvi commented 2 years ago

The code looks OK. Not sure what the problem is. Maybe you need to tune the hyperparameters.
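
(For instance, a minimal sweep over network width and learning rate, reusing pde, geomtime, and output_transform from the script above; the specific values below are illustrative, not a recommendation:)

for width, lr in [(50, 1e-3), (100, 5e-4)]:
    data = dde.data.TimePDE(geomtime, pde, [], num_domain=1600,
                            num_boundary=160, num_initial=400)
    net = dde.maps.FNN([2] + [width] * 4 + [1], "sin", "Glorot normal")
    net.apply_output_transform(output_transform)
    model = dde.Model(data, net)
    model.compile("adam", lr=lr)
    losshistory, train_state = model.train(epochs=20000)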

MINE0126 commented 2 years ago

Hello @q769855234, were you able to apply hard constraints to other problems? Is there any general solution for applying hard constraints? Looking forward to your reply!