mathLab / PINA

Physics-Informed Neural networks for Advanced modeling
https://mathlab.github.io/PINA/
MIT License

DeepONet Example #27

Closed: X52p closed this issue 1 year ago

X52p commented 1 year ago

**Is your feature request related to a problem? Please describe.**
I'm currently trying to use physics-informed DeepONets, and I'm struggling to get them working.

**Describe the solution you'd like**
It would be really nice if there were a small example that demonstrates how to use them.

ndem0 commented 1 year ago

Dear @X52p, is the example code in the docstring of the class itself not working?

```python
# import paths from the alpha release; they may differ in newer versions
from pina.model import FeedForward as FFN, DeepONet

branch = FFN(input_variables=['a', 'c'], output_variables=20)
trunk = FFN(input_variables=['b'], output_variables=20)
onet = DeepONet(trunk_net=trunk, branch_net=branch,
                output_variables=output_vars)  # output_vars: list of output variable names
```

That class, like the entire package, is still in the alpha stage, so there may be errors/bugs anyway.
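
For completeness, here is a minimal usage sketch of the snippet above. It is only a sketch under assumptions: `output_vars` is stood in for by a single hypothetical field `'u'`, the import of `LabelTensor` from the package root matches the alpha release, and the model is expected to split the input columns between branch and trunk by their labels.

```python
import torch
from pina import LabelTensor  # import path assumed from the alpha release

output_vars = ['u']  # hypothetical single output field
onet = DeepONet(trunk_net=trunk, branch_net=branch, output_variables=output_vars)

# ten random samples labelled with all the input variables of the two sub-networks;
# the DeepONet is expected to extract ['a', 'c'] for the branch and ['b'] for the trunk
pts = LabelTensor(torch.rand(10, 3), ['a', 'b', 'c'])
out = onet(pts)
print(out.shape)  # expected: one column per output variable, here (10, 1)
```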

X52p commented 1 year ago

This code is working (so I can create a DeepONet), but I'm struggling to apply it to a parametric problem.

ndem0 commented 1 year ago

Ok but what error are you obtaining? Could you provide a minimal script?

X52p commented 1 year ago

With the PINN class I'm not getting an error, but I can't get it to learn anything, so I think I'm using the library wrong.

(I get an error though when using the ParametricPINN class)

Error with ParametricPINN:

```
File "REDACTED", line 640, in
  pinn.train(5000, 100)
File "REDACTED\lib\site-packages\pina\ppinn.py", line 114, in train
  predicted = self.model(pts.tensor)
AttributeError: 'LabelTensor' object has no attribute 'tensor'
```

My code looks something like this:

```python
branch = FFN(input_variables=problem.spatial_variables, output_variables=hidden,
             inner_size=hidden, n_layers=layers, func=torch.nn.ReLU)
trunk = FFN(input_variables=problem.parameters, output_variables=hidden,
            inner_size=hidden, n_layers=layers, func=torch.nn.ReLU)
onet = DeepONet(trunk_net=trunk, branch_net=branch,
                output_variables=problem.output_variables)
pinn = PINN(problem, onet, lr=1e-4, device='cuda')
pinn.span_pts(20, 'grid', ['D1', 'D2'])
pinn.span_pts(20, 'grid', ['gamma1', 'gamma2', 'gamma3'])
pinn.train(5000, 100)
```

I can prepare a full example if you want, but I will need some time for this.

I'm also a bit confused about how I would handle a time-dependent parametric problem: would I combine the parameters and the time and feed both into the trunk?

ndem0 commented 1 year ago

Ok, thanks! The problem I see is that you're actually using the ParametricPINN class, which is not supported anymore. It's our fault, since the documentation is still very poor and we also forgot to delete that file (now removed in #28), so I apologize.

The rest of the code looks fine; my suggestion is to always start with device='cpu' instead of cuda, since it's easier to debug at the beginning.

Related to your last question, things become trickier but more interesting: the current definition of DeepONet is the one from Karniadakis et al., meaning the network is forced to have the trunk (stacked or unstacked) and the branch. But potentially we could have more than two networks whose outputs are multiplied in an element-wise fashion. This solution is not implemented yet, but we can discuss it if you are interested!
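
To make the multi-network idea concrete, here is a rough plain-PyTorch sketch (this is not part of PINA, and all names are hypothetical): each sub-network maps its own group of input columns to a shared latent size, the latent vectors are combined by an element-wise product, and the product is summed over the latent dimension as in the standard DeepONet.

```python
import torch

class MultiBranchDeepONet(torch.nn.Module):
    """Hypothetical generalization: N sub-networks with a shared latent size,
    combined by an element-wise product and summed like a standard DeepONet."""

    def __init__(self, nets, input_columns):
        super().__init__()
        self.nets = torch.nn.ModuleList(nets)
        self.input_columns = input_columns  # one list of column indices per net

    def forward(self, x):
        latent = None
        for net, cols in zip(self.nets, self.input_columns):
            out = net(x[:, cols])                    # each net sees its own columns
            latent = out if latent is None else latent * out
        return latent.sum(dim=-1, keepdim=True)      # reduce latent dim to a scalar output


def mlp(in_dim, latent=20):
    return torch.nn.Sequential(torch.nn.Linear(in_dim, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, latent))

# three sub-networks acting on columns (0, 1), (2,) and (3, 4, 5) of the input
model = MultiBranchDeepONet([mlp(2), mlp(1), mlp(3)], [[0, 1], [2], [3, 4, 5]])
y = model(torch.rand(8, 6))  # -> shape (8, 1)
```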

X52p commented 1 year ago

Hi, thanks! I've now had success using the PINN class. For the time dependency in combination with parameters, I just used time as a part of the spatial domain, and it worked fine. But using a third network could indeed be interesting and might make it train faster. (Currently I need to use a very low learning rate and a high number of samples to get usable results.)
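
As a concrete (hypothetical) illustration of that trick, reusing the construction from the snippet above: the time label is simply appended to the coordinate labels, so the same sub-network sees `(x, y, t)` points. All names here are placeholders, not the actual problem definition.

```python
hidden, layers = 20, 4                 # placeholder sizes
coords = ['x', 'y', 't']               # time treated as just another coordinate
params = ['gamma1', 'gamma2']          # problem parameters go to the other sub-network

branch = FFN(input_variables=coords, output_variables=hidden,
             inner_size=hidden, n_layers=layers, func=torch.nn.ReLU)
trunk = FFN(input_variables=params, output_variables=hidden,
            inner_size=hidden, n_layers=layers, func=torch.nn.ReLU)
onet = DeepONet(trunk_net=trunk, branch_net=branch, output_variables=['u'])
```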

Also, it would be really useful to be able to train on mini-batches, because (in my case) increasing the grid size lowers the error drastically, but because of the high-dimensional input it hits the GPU memory limit. (I didn't find a way to do this.) Running on the CPU takes days, so this is not an option for me.
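
Since this is not available in PINA at the moment (see the reply below), here is a rough plain-PyTorch workaround sketch, with `model`, `pts` and `residual` as hypothetical placeholders: sample the collocation points once, then step the optimizer over random chunks so that only one chunk's computational graph lives on the GPU at a time.

```python
import torch

def train_minibatched(model, pts, residual, epochs=100, batch_size=1024, lr=1e-4):
    """Hypothetical sketch: `pts` is the full tensor of collocation points and
    `residual(model, batch)` returns the PDE residual evaluated on a batch."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        perm = torch.randperm(pts.shape[0], device=pts.device)
        for start in range(0, pts.shape[0], batch_size):
            batch = pts[perm[start:start + batch_size]]
            loss = residual(model, batch).pow(2).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```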

ndem0 commented 1 year ago

Ok, great to see that! We can try to generalize the DeepONet to an arbitrary number of branches; it shouldn't be too time-consuming.

Yes, I agree, batching would be a nice-to-have feature (at the moment still not implemented). I'll open an issue in order to keep it in mind; stay tuned for new releases!