CheukHinHoJerry closed this PR 1 year ago.
The states also need to get passed in. If not the states then the temporary array cache. Yes, maybe that's better actually.
What's the status of this PR?
I think we talked about this today. I will try to do what we have discussed for trainable basis within this week. I am doing this PR together with https://github.com/ACEsuit/Polynomials4ML.jl/issues/45#issue-1765806905. Basically the updates include:

- an `evaluate!(P, basis, x, zeta, pool)` interface that allows the use of temporary arrays in Lux layers
- non-allocating Lux layers, which use temporary arrays in the middle layers and possibly a `FlexArrayCache` in the last layer, or alternatively do a copy.

For ForwardDiff on `zeta`, I suggest not including it here, since it is something we would have to test and play around with for a while to see how it actually goes.
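For concreteness, a minimal sketch of what such a pooled `evaluate!` could look like. This is only an illustration of the pattern, not the actual implementation: the `GaussianBasis` struct and the convention that `pool` is a caller-owned scratch vector are assumptions made here for the example.

```julia
# Hypothetical sketch: `GaussianBasis` and the shape of `pool` are illustrative.
struct GaussianBasis{T}
    zeta::Vector{T}          # trainable exponents ζ
end

# Fill the preallocated output P (length(x) × length(zeta)) in place,
# using `pool` as scratch space so the kernel itself allocates nothing.
function evaluate!(P, basis::GaussianBasis, x, zeta, pool)
    tmp = pool               # scratch array of length(x), owned by the caller
    @. tmp = x^2
    for (j, z) in enumerate(zeta)
        @. P[:, j] = exp(-z * tmp)   # in-place broadcast into column j
    end
    return P
end

# Usage: buffers are allocated once outside the hot loop.
x = rand(10); zeta = [0.5, 1.0, 2.0]
P = zeros(10, 3); pool = similar(x)
evaluate!(P, GaussianBasis(zeta), x, zeta, pool)
```

The point of threading `pool` through the call is that the middle layers can reuse the same scratch memory across evaluations; only the last layer would need a `FlexArrayCache`-style acquire/release or an explicit copy so its output outlives the call.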
@cortner This is ready to be merged.
The only thing I don't really like about this PR is that it mixes a few unrelated issues - e.g. the non-allocating interface could have been treated in a separate PR. I don't think it is worth trying to separate them out now, that would be too much work. But in future, let's please try and make smaller PRs that solve discrete problems and are easier to review.
Thank you very much for the review and comments. I will make sure that each PR only takes care of a small set of related issues in the future.
Does this PR need any new docs?
Is this a patch version?
Re docs: I think I need to write documentation for the Lux layer and the pullback interfaces. It might also be nice to have a small hierarchy diagram to explain what `ScalarP4ML` etc. are. Should I do this right away?

Re patch version: yes, I think so.
> I think I need to write documentation for the Lux layer and the pullback interfaces. It might also be nice to have a small hierarchy diagram to explain what `ScalarP4ML` etc. are.
At this point, I don't think this is user-facing yet. Let's do this once we finalize the pullback interfaces and make those implementations backward compatible.
I tested on 1.9 and 1.10 and will merge and register.
I've submitted this to General as version 0.1.5.
Thank you very much for your review.
This is now registered.
This PR mainly fixes some bugs and implements the AORlayer for trainable $\zeta$ in the Gaussian basis. The problem of matrix multiplication with `PtrArray{Hyper}` is resolved on `Octavian.jl`'s end.

Reminder that we can actually do something like this: maybe we can leave this PR open and use the dev branch to implement our code in `ACEpsi.jl` until we have a nice cleanup of the Lux interface?

CC @cortner @DexuanZhou