Vaibhavdixit02 opened 11 months ago
This carries over to the Bayesian PINNs, where neither https://arxiv.org/pdf/2205.08304.pdf nor https://arxiv.org/pdf/2003.06097.pdf clearly states the formulation of the likelihood for the parameter estimation case.
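For reference, my reading of what the likelihood would have to look like (this is my assumption, not something either paper states explicitly): a Gaussian data term on the surrogate plus a Gaussian physics term on the equation residual at the current parameters $p$,

$$
p(\mathcal{D} \mid \theta, p) \propto \exp\left(-\frac{1}{2\sigma_d^2}\sum_{i=1}^{N}\left\lVert u_\theta(t_i) - u_i\right\rVert^2\right)\exp\left(-\frac{1}{2\sigma_f^2}\sum_{j=1}^{M}\left\lVert \dot u_\theta(s_j) - f(u_\theta(s_j), p, s_j)\right\rVert^2\right)
$$

where $u_\theta$ is the NN surrogate, $(t_i, u_i)$ the data, and $s_j$ the collocation points.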
How would the derivatives be calculated in cases where an equation contains multiple different differential operator terms? If I'm understanding it correctly, do you forward solve at each updated parameter value p? Or is it a collocation loss between the data and the updated parameter values that gets added to the total loss?
I think the collocation loss would be more efficient and would generalize to PDEs.
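For contrast, the forward-solve variant asked about above would look roughly like this (a sketch with a toy scalar RHS; the data arrays are placeholders):

```julia
using OrdinaryDiffEq

f(u, p, t) = p[1] * u                       # toy right-hand side
prob = ODEProblem(f, 1.0, (0.0, 1.0))

# Forward-solve loss: re-solve the ODE at every new p and compare to data.
function forward_loss(p, tdata, udata)
    sol = solve(remake(prob; p), Tsit5(); saveat = tdata)
    sum(abs2, sol.u .- udata)
end
```

Every loss evaluation pays for a full solve here, which is why the collocation route should be cheaper.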
Okay, so you would take derivatives of the interpolations in that case?
I guess. But based on the above papers, it looks like derivative data is sometimes assumed to be directly available, so I would use that whenever it is.
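Taking derivatives of the interpolations could look something like this, assuming DataInterpolations.jl (the spline choice and RHS here are just for illustration):

```julia
using DataInterpolations

t = collect(0.0:0.1:1.0)
u = exp.(0.5 .* t)                          # placeholder measurements

itp = CubicSpline(u, t)                     # smooth interpolant of the data
f(u, p, t) = p[1] * u                       # toy right-hand side

# Collocation loss: the interpolant's derivative should match the RHS
# evaluated on the interpolated states at the current parameters p.
function collocation_loss(p, ts)
    sum(abs2, DataInterpolations.derivative(itp, s) - f(itp(s), p, s) for s in ts)
end
```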
I have been confused about the estimation of differential equation parameters here for a while now.
I think the current formulation of the problem is suboptimal. The loss function only accounts for the error between the neural network surrogate and the data; it doesn't include a term capturing the norm of the physics equation residual at the current parameters, which is the typical formulation of this problem without an NN solver. Adding that term should also improve the training of the surrogate, since both the data and the physics equation would then be driving convergence.
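Concretely, the objective I have in mind would look something like this (a sketch; the surrogate here is a stand-in for the actual NN):

```julia
using ForwardDiff

u_theta(θ, t)  = θ[1] * tanh(θ[2] * t) + θ[3]      # stand-in surrogate
du_theta(θ, t) = ForwardDiff.derivative(s -> u_theta(θ, s), t)
f(u, p, t)     = p[1] * u                          # toy right-hand side

# Data term (surrogate vs. measurements) plus the physics term
# (equation residual at the current parameters p), as proposed above.
function total_loss(θ, p, tdata, udata, tcolloc)
    data_loss    = sum(abs2, u_theta.(Ref(θ), tdata) .- udata)
    physics_loss = sum(abs2, du_theta(θ, s) - f(u_theta(θ, s), p, s) for s in tcolloc)
    return data_loss + physics_loss
end
```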
There are two parts to this, which I think can be done independently:
- If we only consider initial value problems and ignore the boundary condition data and losses, the physics term could be added as a collocation loss and solved as an OptimizationProblem, either as a nested solve to a loose tolerance or with the losses summed together (see the sketch after this list).
- The case with boundary conditions could be formulated as a boundary value problem. That makes it a harder problem, but we have pretty fast BVP solvers now.
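A minimal sketch of the summed-losses variant with Optimization.jl, reusing `total_loss` from the sketch above (the unknown packing and initial guesses are hypothetical):

```julia
using Optimization, OptimizationOptimJL

tdata   = collect(0.0:0.1:1.0)
udata   = exp.(0.5 .* tdata)                # placeholder measurements
tcolloc = collect(0.0:0.05:1.0)

# Pack the surrogate weights θ and the ODE parameters p into one vector
# and fit them jointly against the summed data + physics loss.
objective(x, _) = total_loss(x[1:3], x[4:4], tdata, udata, tcolloc)

optf = OptimizationFunction(objective, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, [1.0, 1.0, 0.0, 0.3])   # [θ; p] initial guess
sol  = solve(prob, BFGS())
```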