PredictiveIntelligenceLab / Physics-informed-DeepONets


Generate Training Data #2

Closed L-z-Chen closed 2 years ago

L-z-Chen commented 2 years ago

Hi Sifan,

Thank you for making this code public. I am reading the code in PI_DeepONet_DR.ipynb.

Regarding the training data generation in Physics-informed-DeepONets/Diffusion-reaction/PI_DeepONet_DR.ipynb, I think s_train = np.zeros((P, 1)) might be incorrect.
It should be

    s_bc1 = np.zeros((P // 3, 1))
    s_bc2 = np.zeros((P // 3, 1))
    s_bc3 = u
    s_train = np.vstack((s_bc1, s_bc2, s_bc3))

Code Snippet

# Generate training data corresponding to one input sample
def generate_one_training_data(key, P, Q):
    # Numerical solution
    (x, t, UU), (u, y, s) = solve_ADR(key, Nx , Nt, P, length_scale)

    # Generate subkeys
    subkeys = random.split(key, 4)

    # Sample points from the boundary and the initial conditions
    # Here we regard the initial condition as a special type of boundary condition
    x_bc1 = np.zeros((P // 3, 1))
    x_bc2 = np.ones((P // 3, 1))
    x_bc3 = random.uniform(key = subkeys[0], shape = (P // 3, 1))
    x_bcs = np.vstack((x_bc1, x_bc2, x_bc3))

    t_bc1 = random.uniform(key = subkeys[1], shape = (P//3 * 2, 1))
    t_bc2 = np.zeros((P//3, 1))
    t_bcs = np.vstack([t_bc1, t_bc2])

    # Training data for BC and IC
    u_train = np.tile(u, (P,1))
    y_train = np.hstack([x_bcs, t_bcs])
    s_train = np.zeros((P, 1))

    # Sample collocation points
    x_r_idx = random.choice(subkeys[2], np.arange(Nx), shape = (Q,1))
    x_r = x[x_r_idx]
    t_r = random.uniform(subkeys[3], minval = 0, maxval = 1, shape = (Q,1))

    # Training data for the PDE residual
    u_r_train = np.tile(u, (Q,1))
    y_r_train = np.hstack([x_r, t_r])
    s_r_train = u[x_r_idx]

    return u_train, y_train, s_train, u_r_train, y_r_train, s_r_train
# Define boundary loss
def loss_bcs(self, params, batch):
    inputs, outputs = batch
    u, y = inputs

    # Compute forward pass
    s_pred = vmap(self.operator_net, (None, 0, 0, 0))(params, u, y[:,0], y[:,1])
    # Compute loss
    loss = np.mean((outputs.flatten() - s_pred)**2)
    return loss
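For reference, here is a minimal NumPy sketch (not the repo's code; it assumes P is divisible by 3, as the integer divisions require) that reproduces just the stacking of the boundary/initial coordinates, so the pairing of x_bcs with t_bcs and the resulting shapes can be checked in isolation:

```python
import numpy as np

P = 99  # assumed divisible by 3, as in the snippet above
rng = np.random.default_rng(0)

# First third: left boundary (x = 0), second third: right boundary (x = 1),
# last third: initial slice (t = 0) with random x locations.
x_bcs = np.vstack([np.zeros((P // 3, 1)),
                   np.ones((P // 3, 1)),
                   rng.uniform(size=(P // 3, 1))])
# Random t values pair with the two spatial boundaries; t = 0 pairs with
# the random x values of the initial condition.
t_bcs = np.vstack([rng.uniform(size=(2 * P // 3, 1)),
                   np.zeros((P // 3, 1))])

y_train = np.hstack([x_bcs, t_bcs])   # (P, 2) query coordinates
s_train = np.zeros((P, 1))            # zero targets on all BC/IC points
print(y_train.shape, s_train.shape)   # (99, 2) (99, 1)
```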
sifanexisted commented 2 years ago

Hello,

Thank you for your interest. We appreciate your time and help. However, I think our code is correct at this point, since in that diffusion-reaction example we assume zero initial and boundary conditions and the PDE is given by

s_t - D * s_xx - k * s^2 = u

Here please note that u is the source term of the PDE, and we want to learn the map from u to the solution s. Since the initial and boundary values of s are all zero, s_train = np.zeros((P, 1)) should be correct.
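For readers following the thread, the PDE residual implied by this equation can be sketched with JAX autodiff. The snippet below is a toy illustration only: s_fn, D, and k are placeholders standing in for the DeepONet output and the repo's actual coefficients, and a closed-form surrogate is used so the derivative terms can be sanity-checked by hand.

```python
from jax import grad

# Residual of s_t - D * s_xx - k * s**2 - u for a scalar surrogate
# s_fn(params, x, t). D and k are illustrative values, not the repo's.
def pde_residual(s_fn, params, u_val, x, t, D=0.01, k=0.01):
    s = s_fn(params, x, t)
    s_t = grad(s_fn, argnums=2)(params, x, t)           # d s / d t
    s_xx = grad(grad(s_fn, argnums=1), argnums=1)(params, x, t)  # d^2 s / d x^2
    return s_t - D * s_xx - k * s**2 - u_val

# Toy closed-form surrogate: s = p * x^2 * t, so s_t = p*x^2, s_xx = 2*p*t.
def s_toy(p, x, t):
    return p * x**2 * t

x, t, p, u_val = 0.5, 0.3, 2.0, 1.0
r = pde_residual(s_toy, p, u_val, x, t)
# Analytic check: 0.5 - 0.01*1.2 - 0.01*0.15**2 - 1.0, so r ≈ -0.512225
```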

L-z-Chen commented 2 years ago

Thanks, Sifan.