ACEsuit / Polynomials4ML.jl

Polynomials for ML: fast evaluation, batching, differentiation
MIT License

Spherical bessel functions #8

Open cortner opened 2 years ago

cortner commented 2 years ago

... to be able to construct laplacian eigenfunctions.

CC @wcwitt

JPDarby commented 1 year ago

I'm keen to try these out and happy to help implement. Any pointers on where to start?

JPDarby commented 1 year ago

Also, is the idea that ACE.jl and ACE1.jl will eventually import radial functions from here?

cortner commented 1 year ago

Thanks, James! I think step one would be to look up existing Bessel function implementations, see which ones will work for us, and wrap one of them. For speed we can just interpolate onto splines later, or re-implement ourselves with a recursive method. I would start with SpecialFunctions.jl and Bessels.jl, but there may be others.

I believe either @wcwitt or @CheukHinHoJerry have started thinking about this? If you have, can you please report your findings here?

I seem to remember one needs to find the right kind of linear combination to satisfy the boundary conditions. You want the radials to be zero and have zero slope at the cutoff.
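For reference, the "recursive method" option is essentially the standard upward recurrence j_{l+1}(x) = ((2l+1)/x) j_l(x) - j_{l-1}(x). A minimal, self-contained sketch (the function name is illustrative; a production version would wrap SpecialFunctions.jl or Bessels.jl instead, and note that upward recurrence loses accuracy for x much smaller than l):

```julia
# Sketch: spherical Bessel functions j_0 ... j_lmax at a point x via the
# upward recurrence j_{l+1}(x) = (2l+1)/x * j_l(x) - j_{l-1}(x).
# Caveat: upward recurrence is numerically unstable for x << l.
function sphericaljs(lmax::Integer, x::Real)
    j = zeros(lmax + 1)
    if x == 0
        j[1] = 1.0              # j_0(0) = 1, j_l(0) = 0 for l > 0
        return j
    end
    j[1] = sin(x) / x                      # j_0
    lmax == 0 && return j
    j[2] = sin(x) / x^2 - cos(x) / x       # j_1
    for l in 1:lmax-1
        j[l+2] = (2l + 1) / x * j[l+1] - j[l]   # upward recurrence
    end
    return j
end
```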

JPDarby commented 1 year ago

The Laplacian eigenfunctions (LE) are zero at the cutoff but don't have zero slope. They are just spherical Bessel functions scaled so that the nth zero is at the cutoff.
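To make that scaling concrete, here is a hypothetical sketch: if z_nl denotes the n-th positive zero of j_l, then R_nl(r) = j_l(z_nl r / rcut) vanishes at the cutoff by construction (but its slope there is nonzero). The names `sphj`, `besselzero`, and `R` are illustrative, not part of the package:

```julia
# scalar j_l via the upward recurrence (adequate for the small l used here)
function sphj(l::Integer, x::Real)
    x == 0 && return l == 0 ? 1.0 : 0.0
    jm, j = sin(x) / x, sin(x) / x^2 - cos(x) / x
    l == 0 && return jm
    for ll in 1:l-1
        jm, j = j, (2ll + 1) / x * j - jm
    end
    return j
end

# n-th positive zero z_nl of j_l: scan for a sign change, then bisect
function besselzero(l::Integer, n::Integer; step = 0.05, tol = 1e-12)
    a = step; count = 0
    while true
        b = a + step
        if sphj(l, a) * sphj(l, b) < 0
            count += 1
            if count == n
                while b - a > tol
                    m = (a + b) / 2
                    sphj(l, a) * sphj(l, m) <= 0 ? (b = m) : (a = m)
                end
                return (a + b) / 2
            end
        end
        a = b
    end
end

# LE-style radial: zero value at the cutoff, nonzero slope there
R(n, l, r, rcut) = sphj(l, besselzero(l, n) * r / rcut)
```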

cortner commented 1 year ago

Right, I remember now: because of that we said we should use bilaplace eigenfunctions.

cortner commented 1 year ago

If you use scaled spherical Bessel then how would you obtain the zero slope condition? This is needed to have continuous forces.

JPDarby commented 1 year ago

What are "bilaplace eigenfunctions"?

> If you use scaled spherical Bessel then how would you obtain the zero slope condition? This is needed to have continuous forces.

Multiplying by a smooth cutoff function? I thought this is how it's done at the moment when using things like Chebyshev polynomials
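As an illustration of that approach (assuming a cosine envelope, which is one common choice, not necessarily what ACE1 uses): multiplying a Chebyshev radial by a cutoff that has zero value and zero slope at rcut transfers both properties to the product. The names `fcut`, `cheb`, `Rcut` are illustrative:

```julia
# Cosine cutoff envelope: fcut(rcut) = 0 and fcut'(rcut) = 0.
fcut(r, rcut) = r < rcut ? 0.5 * (1 + cos(pi * r / rcut)) : 0.0

# Chebyshev T_k on r in [0, rcut], mapped to x in [-1, 1]
cheb(k, r, rcut) = cos(k * acos(2r / rcut - 1))

# Cutoff-multiplied radial: zero value AND zero slope at rcut,
# at the cost of destroying any eigenfunction property.
Rcut(k, r, rcut) = fcut(r, rcut) * cheb(k, r, rcut)
```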

wcwitt commented 1 year ago

I've been talking to @JPDarby separately, but one thing to consider is that the LE basis is l-dependent. And therefore a bit difficult with ACE1 if I understand correctly? That's why I haven't done this already, but maybe there is a solution now.

> I seem to remember one needs to find the right kind of linear combination to satisfy the boundary conditions.

I think this is for when the domain is the annulus. You need a linear combination of type-1 and type-2 or whatever.

cortner commented 1 year ago

I think there is a lot of confusion going on here / mixup of different conversations ... give me a minute and I'll summarise what I think should be done.

cortner commented 1 year ago

> What are "bilaplace eigenfunctions"?

eigenfunctions of the bilaplace operator = laplace^2
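Spelled out, the reason for going to the bilaplacian: the eigenproblem is fourth order, so it admits two boundary conditions at the cutoff rather than one, which is exactly what the zero-value and zero-slope requirements need:

```latex
\Delta^2 u = \lambda u \quad \text{in } B_{r_{\rm cut}},
\qquad u\big|_{r = r_{\rm cut}} = 0,
\qquad \partial_r u\big|_{r = r_{\rm cut}} = 0,
```

whereas the second-order problem $-\Delta u = \lambda u$ only supports the single condition $u|_{r = r_{\rm cut}} = 0$.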

> Multiplying by a smooth cutoff function? I thought this is how it's done at the moment when using things like Chebyshev polynomials

then you destroy the eigenfunction property that you seem to be so keen on. In ACE1, this has a specific interpretation, but that interpretation is inconsistent with your goal of having the one-particle basis be eigenfunctions of the laplacian in real-space coordinates. If I misunderstood this then maybe it's time to write down a precise specification of what you want.

> > I seem to remember one needs to find the right kind of linear combination to satisfy the boundary conditions.
>
> I think this is for when the domain is the annulus. You need a linear combination of type-1 and type-2 or whatever.

thank you - that sounds right.

cortner commented 1 year ago

> I've been talking to @JPDarby separately, but one thing to consider is that the LE basis is l-dependent. And therefore a bit difficult with ACE1 if I understand correctly? That's why I haven't done this already, but maybe there is a solution now.

it is not difficult but time-consuming and I would have to do it myself. I will only do this if I have a very strong motivation for it.

wcwitt commented 1 year ago

I found some brief notes from the last time I thought about this - no promises they are 100% correct. I also pointed James towards a Slack thread we had.

Laplacian_eigenfunctions-pages-1.pdf

cortner commented 1 year ago

Plan:

cortner commented 1 year ago

Let me also explain why I'm very sceptical about the need for laplacian eigenfunctions. We very much want to move towards trainable or pretrained radials. (Drautz and Shapeev get away with much smaller basis sets and this may be a big part of the reason.) At that point the starting basis is irrelevant. Instead we just want to orthogonalize the trained basis w.r.t. the correct inner product, to then be used in a linear model. @dhan-02 is working on that. Once we have a prototype we wanted to loop in Gabor's group.
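A minimal sketch of that orthogonalization step, assuming the trained radials are sampled on a quadrature grid and the "correct inner product" is a weighted L2 product (the function name and discretization are illustrative, not what @dhan-02 is implementing):

```julia
using LinearAlgebra

# Orthogonalize a (possibly trained) radial basis w.r.t. a weighted L2
# inner product <f, g> = ∫ f g w dr, discretized on a quadrature grid.
# `basis[i, n]` holds the n-th basis function at grid point r_i;
# `w[i]` is the quadrature weight at r_i.
function orthogonalize(basis::Matrix, w::Vector)
    G = basis' * Diagonal(w) * basis   # Gram matrix in the chosen inner product
    U = cholesky(Symmetric(G)).U       # G = U'U
    return basis / U                   # columns now orthonormal w.r.t. <·,·>
end
```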

wcwitt commented 1 year ago

> If you use scaled spherical Bessel then how would you obtain the zero slope condition?

I'd need to double check, but I think this is fairly straightforward to achieve because it is essentially a linear constraint. You can start with the full basis, use the constraint to reduce the basis size by one, and then re-orthogonalize. Pseudopotential people do it this way.
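A sketch of that constrained construction, under the assumption that the end slopes of the N starting radials at rcut are known: restrict to the null space of the slope functional (reducing the basis size by one) and then re-orthogonalize. The function name is illustrative:

```julia
using LinearAlgebra

# Impose the single linear constraint R'(rcut) = 0 on an N-function basis:
# restrict to combinations with zero end slope, then re-orthogonalize.
# `basis[i, n]` holds the n-th radial at grid point r_i;
# `slopes[n]` is the derivative of the n-th radial at rcut (assumed known).
function constrain_slope(basis::Matrix, slopes::Vector)
    C = nullspace(slopes')            # N×(N-1): coefficient vectors with zero end slope
    constrained = basis * C           # N-1 radials, all with zero slope at rcut
    return Matrix(qr(constrained).Q)  # re-orthogonalize w.r.t. the grid inner product
end
```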

JPDarby commented 1 year ago

> eigenfunctions of the bilaplace operator = laplace^2

Thanks.

> > Multiplying by a smooth cutoff function? I thought this is how it's done at the moment when using things like Chebyshev polynomials
>
> then you destroy the eigenfunction property that you seem to be so keen on. In ACE1, this has a specific interpretation, but that interpretation is inconsistent with your goal of having the one-particle basis be eigenfunctions of the laplacian in real-space coordinates. If I misunderstood this then maybe it's time to write down a precise specification of what you want.

Ok, I see: the cutoff goes into the basis functions here. I was thinking about it in a SOAP-style way, where the cutoff multiplies a density which is expanded in fixed basis functions.

cortner commented 1 year ago

Then, to mimic SOAP exactly, that's what you should do of course. And it would also be the easiest case.

JPDarby commented 1 year ago

I've started a very preliminary version of this here

I don't think there's any advantage in using a recursion formula, because R_nl(r) = j_l(z_nl r / rcut), where the z_nl are all different, so the j_l get evaluated at different points.

cortner commented 1 year ago

I see. Makes sense.