matthiasnwt / fast-poisson-solver

The Poisson equation is an integral part of many physical phenomena, yet its computation is often time-consuming. This module presents an efficient method using physics-informed neural networks (PINNs) to rapidly solve arbitrary 2D Poisson problems.

Scaling matrix size #2

Open jtbr opened 5 months ago

jtbr commented 5 months ago

Hello, thanks for this library; I like your approach to the problem. But when testing it, it seems to use more GPU memory than I expected.

If I simply change `grid_num` in the `examples/testrun.py` file to 600, `precompute()` runs out of memory on a 24 GB GPU card [in `grad()` called from `calculate_laplace()`, or for larger matrices, within `PINN.h()`]. I actually need a rectangular grid of about 2300 x 800 for my problem (which corresponds in size to a square of about 1350), so it's not even close. I'm surprised by this. If I were able to precompute on a larger GPU, would `solve()` need less memory, so I could perhaps continue to use the 24 GB card?
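
Just to spell out the size equivalence (plain arithmetic, no library code):

```python
# Rectangular target grid vs. the square grid with the same number of points.
nx, ny = 2300, 800
n_points = nx * ny            # 1,840,000 grid points
side = n_points ** 0.5        # ~1356, i.e. roughly a 1350 x 1350 square
print(n_points, round(side))
```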

Are there any other settings I should change to reduce the memory needed for `precompute()`? Would they sacrifice performance?

Also, there is a note in the README about adding support for interior boundary conditions in the future, but it seems this is already supported. Is that correct?

Many thanks

jtbr commented 5 months ago

I ran a test to see how memory scales with `grid_size`. 400 x 400 is about the largest I could fit on my 24 GB GPU. It does appear that `precompute()` uses by far the most memory, so if I could manage to precompute, the solver would work great for me. But even with an 80 GB GPU it doesn't look like I could precompute at my problem size. See the image below. Not sure if there's any way to reduce memory usage during `precompute()`.

[attached plot: peak GPU memory usage vs. grid size]
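
In case it helps, this is roughly how I measured it (a sketch only; `run_precompute` below is a placeholder for whatever `examples/testrun.py` does for a given `grid_num`, so the exact call will differ):

```python
import torch

# Sketch of the memory-scaling test: measure peak CUDA memory for one
# precompute run per grid size. `run_precompute` is a placeholder for
# whatever examples/testrun.py does for a given grid_num.
def measure_peak_memory_gib(grid_num, run_precompute):
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    try:
        run_precompute(grid_num)   # build the grid and call precompute()
    except RuntimeError:           # CUDA out-of-memory surfaces as RuntimeError
        return None                # this grid size did not fit on the GPU
    return torch.cuda.max_memory_allocated() / 1024**3

# for n in (100, 200, 300, 400, 500, 600):
#     print(n, measure_peak_memory_gib(n, run_precompute))
```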

matthiasnwt commented 5 months ago

Hi, thanks for your interest in my work. Your observations are correct. The pre-compute step is the most intensive step, and it scales quadratically. Once the pre-compute is done, you can solve large grids efficiently, but the pre-compute has to be done once.

In the current implementation, there is nothing you can change to make it fit on a 24 GB GPU. Even on much larger GPUs you would run out of memory very quickly.
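
As a back-of-envelope illustration (extrapolating only from the 400 x 400 / 24 GB data point above and the quadratic growth in grid points, so treat it as an order-of-magnitude estimate):

```python
# If ~24 GB is already needed at 400 x 400 (160k points) and pre-compute
# memory grows quadratically with the number of grid points, the 2300 x 800
# target is far beyond any single GPU.
ref_points, ref_mem_gb = 400 * 400, 24.0
target_points = 2300 * 800
est_mem_gb = ref_mem_gb * (target_points / ref_points) ** 2
print(f"~{est_mem_gb:,.0f} GB")   # on the order of 3,000 GB
```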

Unfortunately, I am not sure if I will have the time to look into it more closely and find a solution.

Do you want to solve the Poisson equation for your 2300 x 800 grid very often, or just once?

One thing you should keep in mind: the neural network serves as the basis functions, but it is trained on much smaller grids. I am not sure how well these basis functions can approximate the solution on your much larger grid.

jtbr commented 5 months ago

Many thanks for your input. I would be solving the equation many times with the same PDE and boundary condition domains, which is why the approach suits me. But indeed, that is an important caveat: since the neural network is trained on much smaller grids, it might not give a precise enough approximation at this higher resolution.

matthiasnwt commented 5 months ago

My method uses 700 basis functions; for a grid with almost 2 million points (2300 x 800) this could be too few. The largest I tried was 160K points (400 x 400), and the accuracy already started to degrade. Solving the Poisson equation on a grid with millions of points is an interesting research question. I think it would be possible to scale up my approach, but the training and pre-compute will always scale quadratically with my approach; only the inference is constant. I think you need to look into a different method.
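
To put the numbers side by side (just the ratio of grid points to basis functions, nothing specific to the implementation):

```python
# Grid points each of the 700 basis functions has to cover, for the largest
# grid I tested versus the 2300 x 800 target grid.
n_basis = 700
for nx, ny in [(400, 400), (2300, 800)]:
    print(f"{nx} x {ny}: {nx * ny / n_basis:.0f} points per basis function")
# 400 x 400:  ~229 points per basis function
# 2300 x 800: ~2629 points per basis function
```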

jtbr commented 5 months ago

Ok, thanks for your help