Closed · agoscinski closed this 1 year ago
I would like to merge EquistoreAutograd into RascalineAutograd, since the backward functions are identical.
Yes, that was my idea: refactor the code to have only one custom `torch::autograd::Function`. I'll finish up #200 first though; this is less urgent IMO.
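For reference, a minimal sketch of what such a single merged function could look like, assuming the precomputed features and their gradients with respect to positions are passed in as plain tensors. All names and shapes here (`MergedAutograd`, `values`, `values_grad`) are illustrative, not the actual rascaline_torch code:

```cpp
#include <torch/torch.h>

// Illustrative sketch of one merged autograd function replacing both
// EquistoreAutograd and RascalineAutograd. `values` holds precomputed
// features (left undefined when called from the compute() path) and
// `values_grad` holds d(values)/d(positions); both names are hypothetical.
struct MergedAutograd : public torch::autograd::Function<MergedAutograd> {
    static torch::Tensor forward(
        torch::autograd::AutogradContext* ctx,
        torch::Tensor positions,    // [n_atoms, 3]
        torch::Tensor values,       // [n_samples, n_properties]
        torch::Tensor values_grad   // [n_samples, n_properties, n_atoms, 3]
    ) {
        // An undefined `values` plays the role of the nullptr check from the
        // discussion: compute() would calculate fresh features here, while
        // register_autograd() reuses the precomputed ones.
        TORCH_CHECK(values.defined(), "compute() path omitted in this sketch");
        ctx->save_for_backward({values_grad});
        return values;
    }

    static torch::autograd::variable_list backward(
        torch::autograd::AutogradContext* ctx,
        torch::autograd::variable_list grad_outputs
    ) {
        auto values_grad = ctx->get_saved_variables()[0];
        // The backward is identical for both entry points: chain rule
        // through the precomputed feature gradients.
        auto grad_positions = torch::einsum(
            "sp,spnd->nd", {grad_outputs[0], values_grad}
        );
        // One gradient slot per forward() input: positions, values, values_grad
        return {grad_positions, torch::Tensor(), torch::Tensor()};
    }
};
```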
This is now in #200!
Implements the function `register_autograd` in the rascaline_torch Calculators, which allows recreating the autograd graph without recomputing the features. This is useful for training over multiple epochs, where the features and their gradients can be precomputed once and reused for all epochs.

TODOs: I would like to merge EquistoreAutograd into RascalineAutograd, since the backward functions are identical. I would use a check on the `nullptr` of the `tensor_map` to determine whether it was called from `register_autograd` or from `compute`.
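To illustrate the intended workflow, here is a hypothetical call pattern building on the merged-function sketch above. The real entry points go through the Calculator and the equistore TensorMap; everything below is an assumption for illustration, not the actual API:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
    // Precomputed features and their gradients w.r.t. positions, standing in
    // for data that a real compute() call would store in a TensorMap
    // (shapes as in the sketch above).
    auto positions = torch::randn({5, 3}, torch::requires_grad());
    auto values = torch::randn({5, 4});
    auto values_grad = torch::randn({5, 4, 5, 3});

    // register_autograd-style path: rebuild the autograd graph on top of the
    // precomputed values without recomputing them. Doing this once per epoch
    // is cheap compared to re-running the calculator.
    auto features = MergedAutograd::apply(positions, values, values_grad);

    // Any loss built from `features` now backpropagates to `positions`.
    features.sum().backward();
    std::cout << positions.grad().sizes() << "\n";  // prints [5, 3]
    return 0;
}
```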
📚 Documentation preview 📚: https://rascaline--205.org.readthedocs.build/en/205/