IntegralEquations / HMatrices.jl

A Julia library for hierarchical matrices
MIT License

Assembling of Helmholtz kernel #25

Closed mipals closed 1 year ago

mipals commented 1 year ago

Hi Luiz,

I am playing around with this package for Helmholtz problems. Do you recommend using `assemble_hmat` from this package or `assemble_ifgf` from your IFGF.jl package?

Cheers, Mikkel

mipals commented 1 year ago

Furthermore, how would one go about creating an H-matrix for the double-layer potential? I only seem to be able to pass a constant to my kernel.

maltezfaria commented 1 year ago

> Hi Luiz,
>
> I am playing around with this package for Helmholtz problems. Do you recommend using `assemble_hmat` from this package or `assemble_ifgf` from your IFGF.jl package?
>
> Cheers, Mikkel

Hey Mikkel,

The algorithms in HMatrices.jl and IFGF.jl are quite different, and thus my final recommendation regarding which to use would depend on some fine details of your problem (e.g. are you in the high-frequency regime, is RAM a bottleneck, is the problem well-conditioned, etc.).

Overall, however, I think HMatrices.jl is a better starting point. The code is more mature and has more functionality (you can, for instance, perform a hierarchical LU if your problem is not well-conditioned). The IFGF package should still be considered an experimental research project 😅

maltezfaria commented 1 year ago

> Furthermore, how would one go about creating an H-matrix for the double-layer potential? I only seem to be able to pass a constant to my kernel.

I will post a MWE later today! The docs are definitely missing this example.
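
In the meantime, roughly what I have in mind is something along these lines (purely a sketch: the wavenumber, the unit-sphere geometry, and the commented-out `KernelMatrix` / `assemble_hmat` calls are placeholders whose exact names and keywords may differ from the eventual docs example):

```julia
using StaticArrays, LinearAlgebra
# using HMatrices   # would provide the kernel-matrix wrapper and assembly routines

const k = 2π   # wavenumber (illustrative)

# Points on the unit sphere; for this geometry the outward normal at y is y itself,
# which sidesteps the question of attaching extra data (normals) to the kernel.
N   = 2_000
pts = [normalize(@SVector randn(3)) for _ in 1:N]

# Double-layer Helmholtz kernel ∂G/∂n_y, with G(x, y) = exp(im*k*r) / (4π*r), r = |x - y|
function dlp(x, y)
    x == y && return zero(ComplexF64)   # crude handling of the diagonal
    r  = norm(x - y)
    ny = y                              # unit-sphere normal (placeholder)
    return exp(im * k * r) / (4π * r^3) * (1 - im * k * r) * dot(x - y, ny)
end

# With HMatrices.jl one would then wrap the kernel and compress it, e.g. roughly:
#   K = KernelMatrix(dlp, pts, pts)
#   H = assemble_hmat(K; rtol = 1e-6)
```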

mipals commented 1 year ago

Hi again,

I guess I could have guessed that answer. Most numerics is problem dependent 😃

I can definitely see that there is some extra memory used with the H-matrix (it's the HGOperator) 😅

(screenshot: memory usage comparison, 2023-02-18)

But it's also a little difficult to make a fair comparison due to them having different tolerances (the IFGF operators seem to only have a `tol` keyword, whereas the `HMatrix` has both `rtol` and `atol`). The upside to the larger memory seems to be that the multiplication afterwards is quite a bit faster.

Unfortunately, my current application cannot make use of the hierarchical LU, as the underlying system of equations is not just the H-matrix but a collection of multiple (structured and sparse) matrices. For now I am just swapping the multiplication with the rank-structured matrices from the Fast Multipole Method to H-matrices and comparing the two.

No stress regarding the MWE. I was just curious (but something that I would need for my application) 😄

maltezfaria commented 1 year ago

I added a simple example in the docs. Will work on polishing it later.

mipals commented 1 year ago

Perfect. Thanks!

maltezfaria commented 1 year ago

> Hi again,
>
> I guess I could have guessed that answer. Most numerics is problem dependent 😃
>
> I can definitely see that there is some extra memory used with the H-matrix (it's the HGOperator) 😅 But it's also a little difficult to make a fair comparison due to them having different tolerances (the IFGF operators seem to only have a `tol` keyword, whereas the `HMatrix` has both `rtol` and `atol`). The upside to the larger memory seems to be that the multiplication afterwards is quite a bit faster.

Nice! What FMM library are you using? Indeed, HMatrices tend to have a much larger precomputation/assembly stage (and much larger memory requirements), but are faster for the mat-vec product. And I agree, one would also have to compare the precision of the different methods...

> Unfortunately, my current application cannot make use of the hierarchical LU, as the underlying system of equations is not just the H-matrix but a collection of multiple (structured and sparse) matrices. For now I am just swapping the multiplication with the rank-structured matrices from the Fast Multipole Method to H-matrices and comparing the two.

I see. Sometimes you can still factor the "dense" HMatrix block after a Schur complement...
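
To illustrate the idea with a toy example (plain dense/sparse matrices standing in for the actual blocks; the sizes and names are made up):

```julia
using LinearAlgebra, SparseArrays

# Toy coupled system [A B; C H] * [x1; x2] = [b1; b2], where A is sparse/structured
# and H stands in for the dense (rank-structured) block.
n, m = 200, 100
A  = spdiagm(0 => 1 .+ rand(n)) + 0.01 * sprandn(n, n, 0.01)
B  = randn(n, m); C = randn(m, n)
H  = randn(m, m) + m * I
b1 = randn(n);    b2 = randn(m)

# Eliminate the sparse unknowns first; the Schur complement S lives in the dense
# block, and that is the matrix one could approximate and factor hierarchically.
Af = lu(A)                       # sparse LU of the structured block
S  = H - C * (Af \ B)            # Schur complement in the dense block
x2 = S \ (b2 - C * (Af \ b1))
x1 = Af \ (b1 - B * x2)
```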

> No stress regarding the MWE. I was just curious (but something that I would need for my application) 😄

I hope it is clear how to do it now. Makes me wonder if I should not add these standard kernels to the package's API (potentially optimised to make use of SIMD instructions). I wanted to keep it very generic, but in practice most people care about three or four PDEs and the associated single and double layer kernels 🤔
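
For concreteness, such a prepackaged kernel could look something like the hypothetical sketch below (nothing like this exists in the current API; the type name and signature are made up):

```julia
using StaticArrays, LinearAlgebra

# Hypothetical prepackaged kernel: a callable struct carrying the wavenumber,
# so users would not have to write the Green's function by hand.
struct HelmholtzSingleLayer{T<:Real}
    k::T   # wavenumber
end

function (G::HelmholtzSingleLayer)(x::SVector{3}, y::SVector{3})
    x == y && return zero(ComplexF64)   # diagonal left to the user / quadrature
    r = norm(x - y)
    return exp(im * G.k * r) / (4π * r)
end

# Usage sketch: G = HelmholtzSingleLayer(2π), then pass G wherever a generic
# two-argument kernel function is accepted (e.g. when building a kernel matrix).
```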

mipals commented 1 year ago

> Nice! What FMM library are you using? Indeed, HMatrices tend to have a much larger precomputation/assembly stage (and much larger memory requirements), but are faster for the mat-vec product. And I agree, one would also have to compare the precision of the different methods...

I use the Flatiron Institute multipole libraries, as there already existed a (partial) Julia interface for 3D (FMM3D.jl). I then created one for 2D (FMM2D.jl - it only covers Helmholtz problems, as that was what I needed and the only thing actually covered in their manual), but I have not really tested it yet since I mainly deal with 3D. Regarding memory, note that the footprint I showed is not particularly truthful for the FMM operator, as it only stores what is needed to call the FMM3D library (when it is called, it uses additional memory since it is actually performing the FMM summation).

> I hope it is clear how to do it now. Makes me wonder if I should not add these standard kernels to the package's API (potentially optimised to make use of SIMD instructions). I wanted to keep it very generic, but in practice most people care about three or four PDEs and the associated single and double layer kernels 🤔

I think it was pretty clear 👍 Yeah, it's always a little difficult to figure out when to be generic and when to optimize. For now, don't stress too much about optimizing for my sake. I am just poking around a little 😄