Motivated by Raissi's PINN model and the provided data for the exact solution of Burgers' equation, I want to implement the inversion/identification problem in SciANN. Following and adapting the provided SciANN example for Navier-Stokes inversion, I came up with the attached code for Burgers'. However, the network yields very poor results: lambda1 = 0.03 and lambda2 = 4e-05 (using the Adam optimizer, 10000 epochs, and lr = 0.001), which is far from the exact lambda1 = 1 and lambda2 = 0.01/pi.
Aside from the difference that Raissi uses an L-BFGS-B optimizer, there must be some problem with my code or data reshaping that I do not see.
Here's my code:
```python
import numpy as np
import sciann as sn
import matplotlib.pyplot as plt
import scipy.io

def prepData(n):
    # Import data from Raissi
    data = scipy.io.loadmat('burgers_shock.mat')
    U_star = data['usol']
    t_star = data['t']
    X_star = data['x']
    # Dimensions
    N = X_star.shape[0]
    T = t_star.shape[0]
    # Reshape
    xx = np.tile(X_star[:, 0:1], (1, T))  # N x T
    tt = np.tile(t_star, (1, N))          # N x T
    # Randomly pick n exact solution data points out of 256x100 = 25600
    idx = np.random.choice(N*T, n, replace=False)
    x = xx.flatten()
    t = tt.flatten()
    u = U_star.flatten()
    return (x, t, u, idx)
```
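As a sanity check on the tiling, here is a small standalone sketch (with synthetic stand-ins for the .mat arrays, so it runs without burgers_shock.mat; N = 256 and T = 100 are the shapes of the Burgers data set) comparing the tile-based grids against np.meshgrid:

```python
import numpy as np

# Synthetic stand-ins for the .mat contents (same shapes as burgers_shock.mat):
N, T = 256, 100
X_star = np.linspace(-1, 1, N).reshape(-1, 1)  # N x 1
t_star = np.linspace(0, 1, T).reshape(-1, 1)   # T x 1

# Reference grids via meshgrid: both N x T, with XX[i, j] = x_i and TT[i, j] = t_j
XX, TT = np.meshgrid(X_star[:, 0], t_star[:, 0], indexing='ij')

# The tile-based construction from prepData above:
xx = np.tile(X_star[:, 0:1], (1, T))  # repeats the x column T times -> N x T
tt = np.tile(t_star, (1, N))          # repeats the t column N times -> T x N

print(xx.shape, tt.shape)             # shapes of the two tiled grids

# An N x T construction of the t grid, for comparison:
tt_NxT = np.tile(t_star.T, (N, 1))
print(np.allclose(xx, XX), np.allclose(tt_NxT, TT))
```

Both grids should come out N x T so that flatten() pairs each (x, t) with the matching entry of U_star.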
```python
# Generate data
x_train, t_train, u_train, ids = prepData(2000)
input_data = [x_train[ids], t_train[ids]]
sample_u_ex = u_train[ids]
sample_u_ex = sample_u_ex.reshape(-1, 1)
u_train.reshape(-1, 1)

x = sn.Variable("x", dtype='float64')
t = sn.Variable("t", dtype='float64')
u = sn.Functional("u", [x, t], 8*[20], 'tanh')
lambda1 = sn.Parameter(val=0, inputs=[x, t], name="lambda1")
lambda2 = sn.Parameter(val=-6.0, inputs=[x, t], name="lambda2")

# Gradient layers
u_t = sn.utils.grad(u, t)
u_x = sn.utils.grad(u, x)
u_xx = sn.utils.grad(u, x, order=2)

# PINN
Loss_f = u_t + lambda1*u*u_x - lambda2*u_xx
pinn = sn.SciModel(inputs=[x, t], targets=[u, Loss_f], loss_func="mse", optimizer="Adam")

pinn.train(
    input_data,
    [sample_u_ex, 'zeros'],
    learning_rate=0.001,
    epochs=10000
)
print("lambda1: {}, lambda2: {}".format(lambda1.value, lambda2.value))
```
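For reference, here is a quick computation of how far the reported values are from the exact coefficients (using only the numbers quoted above):

```python
import numpy as np

# Exact coefficients of Burgers' equation: u_t + lambda1*u*u_x - lambda2*u_xx = 0
lambda1_exact = 1.0
lambda2_exact = 0.01 / np.pi   # ~0.003183

# Values the training run returned (Adam, lr = 0.001, 10000 epochs)
lambda1_pred = 0.03
lambda2_pred = 4e-05

rel_err1 = abs(lambda1_pred - lambda1_exact) / abs(lambda1_exact)
rel_err2 = abs(lambda2_pred - lambda2_exact) / abs(lambda2_exact)
print(f"relative errors: lambda1 {rel_err1:.2%}, lambda2 {rel_err2:.2%}")
```

So both identified parameters are off by well over 90% relative error.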
Any help is very much appreciated!