Hi! Thank you for the hint, and indeed you are right :) We are already implementing this in #85, and the feature will be available when we release the beta version.
Hey @LoveFrootLoops! We added closure support in #104 and it will be available in the next release (beta version). If you are curious you can try it out. I have used the LBFGS optimiser to correctly (and faster) train a simple Poisson model. Please note that the documentation is not ready yet since it is still a draft pull request, but thanks to your suggestion everyone will now be able to use advanced optimisers.
Hi @LoveFrootLoops! We added some new features and you should now be able to use the new v0.1 version for optimising with LBFGS. Since there are no tutorials yet, I'll give you a snippet of commented code showing how to train a Poisson problem (we haven't changed the definition of the problem classes from the current version).
Here is the code:
import torch
from pina.problem import SpatialProblem
from pina.operators import nabla
from pina.geometry import CartesianDomain
from pina import Condition, LabelTensor, PINN
from pina.trainer import Trainer
from pina.model import FeedForward
from pina.equation.equation import Equation
from pina.equation.equation_factory import FixedValue
from pina.plotter import Plotter

# define the laplace equation
def laplace_equation(input_, output_):
    force_term = (torch.sin(input_.extract(['x'])*torch.pi) *
                  torch.sin(input_.extract(['y'])*torch.pi))
    nabla_u = nabla(output_.extract(['u']), input_)
    return nabla_u - force_term

class Poisson(SpatialProblem):
    output_variables = ['u']
    spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
    conditions = {
        'gamma1': Condition(
            location=CartesianDomain({'x': [0, 1], 'y': 1}),
            equation=FixedValue(0.0)),
        'gamma2': Condition(
            location=CartesianDomain({'x': [0, 1], 'y': 0}),
            equation=FixedValue(0.0)),
        'gamma3': Condition(
            location=CartesianDomain({'x': 1, 'y': [0, 1]}),
            equation=FixedValue(0.0)),
        'gamma4': Condition(
            location=CartesianDomain({'x': 0, 'y': [0, 1]}),
            equation=FixedValue(0.0)),
        'D': Condition(
            input_points=LabelTensor(torch.rand(size=(100, 2), requires_grad=True), ['x', 'y']),
            equation=Equation(laplace_equation)),
    }

    def poisson_sol(self, pts):
        return -(
            torch.sin(pts.extract(['x'])*torch.pi) *
            torch.sin(pts.extract(['y'])*torch.pi)
        )/(2*torch.pi**2)

    truth_solution = poisson_sol

# make the problem
poisson_problem = Poisson()
model = FeedForward(len(poisson_problem.input_variables), len(poisson_problem.output_variables))

# make the solver
solver = PINN(problem=poisson_problem, model=model, optimizer=torch.optim.LBFGS)

# sample points on the boundaries
boundaries = ['gamma1', 'gamma2', 'gamma3', 'gamma4']
n = 10
poisson_problem.discretise_domain(n, 'grid', locations=boundaries)

# train the model (ONLY CPU for now, all other devices in the official release)
trainer = Trainer(solver=solver, kwargs={'max_epochs': 500, 'accelerator': 'cpu'})
trainer.train()

# plot the results
plotter = Plotter()
plotter.plot(solver)
And here is a sample of the output:
Hope you find it helpful! We also plan to start including new PINN models and Neural Operators. If you would like to contribute, look at #85 and contact us. I hope this solves your problems. Cheers :)
Hi @dario-coscia,
I'm thrilled to see that the latest improvements have yielded such great results. Wow!
I have another request or suggestion that could potentially enhance the training process significantly. I was wondering if it's possible to directly incorporate boundary conditions into the neural network. By doing so, we wouldn't need to train the network on trivial boundary conditions separately, as they would already be satisfied by construction.
The modified approach would look something like this:
def u_mod(input_, output_):
    return input_.extract(['x'])*(1-input_.extract(['x']))*\
        input_.extract(['y'])*(1-input_.extract(['y']))*\
        output_.extract(['u'])

def laplace_equation(input_, output_):
    output_.extract(['u']) = u_mod(input_, output_)  ## THIS IS NOT WORKING SO FAR
    force_term = (torch.sin(input_.extract(['x'])*torch.pi) *
                  torch.sin(input_.extract(['y'])*torch.pi))
    nabla_u = nabla(output_.extract(['u']), input_)
    return nabla_u - force_term

class Poisson(SpatialProblem):
    output_variables = ['u']
    spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
    conditions = {
        'D': Condition(
            input_points=LabelTensor(torch.rand(size=(100, 2), requires_grad=True), ['x', 'y']),
            equation=Equation(laplace_equation)),
    }
As a result, the implementation of "complicated" boundary conditions would be the only requirement, leading to reduced computational effort for the neural network.
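(In other words, the construction is $\tilde u(x,y) = x(1-x)\,y(1-y)\,u_{\mathrm{NN}}(x,y)$: the polynomial prefactor vanishes for $x \in \{0,1\}$ and $y \in \{0,1\}$, so the homogeneous Dirichlet conditions on the unit square are satisfied by construction, whatever the network outputs.)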
Yes @LoveFrootLoops! This is sometimes a very nice strategy indeed. You can already do it easily with the new v0.1 version. The idea is to define your own NN model and impose the constraints in the forward. Here I show you an example with the same network and optimizer as the example above.
import torch
from pina.problem import SpatialProblem
from pina.operators import nabla
from pina.geometry import CartesianDomain
from pina import Condition, LabelTensor, PINN
from pina.trainer import Trainer
from pina.model import FeedForward
from pina.equation.equation import Equation
from pina.equation.equation_factory import FixedValue
from pina.plotter import Plotter

# define the laplace equation
def laplace_equation(input_, output_):
    force_term = (torch.sin(input_.extract(['x'])*torch.pi) *
                  torch.sin(input_.extract(['y'])*torch.pi))
    nabla_u = nabla(output_.extract(['u']), input_)
    return nabla_u - force_term

# define a custom torch model
class hardMLP(torch.nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 20),
                                          torch.nn.Tanh(),
                                          torch.nn.Linear(20, 20),
                                          torch.nn.Tanh(),
                                          torch.nn.Linear(20, output_dim))

    # here in the forward we implement the hard constraints: the prefactor
    # x*(1-x)*y*(1-y) vanishes on the boundary of the unit square, so the
    # Dirichlet conditions are satisfied by construction
    def forward(self, x):
        return x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))*self.layers(x)

class Poisson(SpatialProblem):
    output_variables = ['u']
    spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
    conditions = {
        'D': Condition(
            input_points=LabelTensor(torch.rand(size=(100, 2), requires_grad=True), ['x', 'y']),
            equation=Equation(laplace_equation)),
    }

    def poisson_sol(self, pts):
        return -(
            torch.sin(pts.extract(['x'])*torch.pi) *
            torch.sin(pts.extract(['y'])*torch.pi)
        )/(2*torch.pi**2)

    truth_solution = poisson_sol

# make the problem
poisson_problem = Poisson()
model = hardMLP(len(poisson_problem.input_variables), len(poisson_problem.output_variables))

# make the solver
solver = PINN(problem=poisson_problem, model=model, optimizer=torch.optim.LBFGS)

# train the model (ONLY CPU for now, all other devices in the official release)
trainer = Trainer(solver=solver, kwargs={'max_epochs': 250, 'accelerator': 'cpu'})
trainer.train()

# plot the results
plotter = Plotter()
plotter.plot(solver)
This gives you the following output:
As you can see, we obtain a much lower L2 error! These are just a few of the new capabilities we are including. When releasing the beta version we will make an in-depth review and many tutorials, so that everyone can easily start their journey into PINA! Maybe you are interested in making a tutorial on these hard constraints 😃?
Hope it was helpful for you, and thanks again for the useful feedback 🚀
@LoveFrootLoops I'm closing the issue since it was related to the closure function, which we implemented in v0.1 and will release soon. Thanks again for the feedback, and I hope to hear from you soon for a collaboration, if interested!
In order to use optimizers like LBFGS, the training procedure needs to provide a closure function that performs a forward pass, zero_grad, and backward on the model. While this function may be optional for certain optimizers, it is highly recommended to use it with every optimizer.
Here is an example without closure:
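A minimal plain-PyTorch sketch (stand-in linear model and random data, purely illustrative, not PINA code):

import torch

# stand-in model and data
model = torch.nn.Linear(2, 1)
optimizer = torch.optim.Adam(model.parameters())
x = torch.rand(100, 2)
target = torch.zeros(100, 1)

for epoch in range(100):
    optimizer.zero_grad()                                   # reset gradients
    loss = torch.nn.functional.mse_loss(model(x), target)   # forward pass
    loss.backward()                                         # backward pass
    optimizer.step()                                        # optimizers like Adam need no closure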
And this one with closure:
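The same sketch with a closure, which bundles zero_grad, the forward pass, and backward into a single callable that the optimizer can invoke as many times as it needs per step (this is what LBFGS requires):

optimizer = torch.optim.LBFGS(model.parameters())

for epoch in range(100):
    def closure():
        optimizer.zero_grad()                                   # reset gradients
        loss = torch.nn.functional.mse_loss(model(x), target)   # forward pass
        loss.backward()                                         # backward pass
        return loss
    optimizer.step(closure)   # LBFGS re-evaluates the loss through the closure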