Luthaf / rascaline

Computing representations for atomistic machine learning
https://luthaf.fr/rascaline/
BSD 3-Clause "New" or "Revised" License

Implementation of register_autograd to recreate autograd graph without recomputation #205

Closed · agoscinski closed this 1 year ago

agoscinski commented 1 year ago

Implements a `register_autograd` function for the rascaline_torch calculators, which allows recreating the autograd graph without recomputing the features. This is useful when training over multiple epochs: the features and their gradients can be precomputed once and then reused in every epoch.
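For illustration, here is a minimal sketch of the underlying idea, using libtorch's custom autograd functions: a node whose forward simply returns the precomputed features, and whose backward applies the chain rule with precomputed gradients instead of recomputing the representation. `PrecomputedAutograd` and the tensor shapes below are hypothetical, not rascaline's actual implementation.

```cpp
#include <torch/torch.h>

// Hypothetical sketch, NOT rascaline's actual code: re-attach precomputed
// features and gradients to the autograd graph. Assumed shapes:
//   features:      (samples, n_features)
//   features_grad: (samples, n_features, atoms, 3) = d(features)/d(positions)
struct PrecomputedAutograd : torch::autograd::Function<PrecomputedAutograd> {
    static torch::Tensor forward(
        torch::autograd::AutogradContext* ctx,
        torch::Tensor positions,      // leaf we want gradients for
        torch::Tensor features,       // precomputed, detached features
        torch::Tensor features_grad   // precomputed d(features)/d(positions)
    ) {
        ctx->save_for_backward({features_grad});
        // no recomputation: hand the precomputed features back, now
        // connected to `positions` in the autograd graph
        return features;
    }

    static torch::autograd::variable_list backward(
        torch::autograd::AutogradContext* ctx,
        torch::autograd::variable_list grad_outputs
    ) {
        auto features_grad = ctx->get_saved_variables()[0];
        // chain rule using the stored gradients instead of a new calculation
        auto grad_positions = torch::einsum(
            "sf,sfad->ad", {grad_outputs[0], features_grad}
        );
        // one entry per forward input: positions, features, features_grad
        return {grad_positions, torch::Tensor(), torch::Tensor()};
    }
};
```

With such a node, `PrecomputedAutograd::apply(positions, features, features_grad)` returns features that behave as if they had just been computed: calling `backward()` on a loss built from them fills `positions.grad()` from the stored gradients, epoch after epoch, without re-running the calculator.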

TODO: I would like to merge EquistoreAutograd into RascalineAutograd, since their backward functions are identical. I would check whether the `tensor_map` is a `nullptr` to determine whether the function was called from `register_autograd` or from `compute`; see the sketch below.
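To make that dispatch concrete, here is a hedged sketch of the nullptr check; `TensorMap` below is a toy stand-in rather than the equistore type, and the function name is made up:

```cpp
#include <memory>
#include <torch/torch.h>

// Toy stand-in for equistore's TensorMap, only to keep the sketch
// self-contained; the real type is more involved.
struct TensorMap {
    torch::Tensor positions_gradient;
};

// Hypothetical dispatch: a null tensor_map means the autograd node was
// created through register_autograd, so the precomputed gradient is used;
// otherwise the gradient comes from the TensorMap filled by compute().
torch::Tensor positions_gradient(
    const std::shared_ptr<TensorMap>& tensor_map,
    const torch::Tensor& precomputed_gradient
) {
    if (tensor_map == nullptr) {
        return precomputed_gradient;        // register_autograd path
    }
    return tensor_map->positions_gradient;  // compute path
}
```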


📚 Documentation preview: https://rascaline--205.org.readthedocs.build/en/205/

Luthaf commented 1 year ago

> I would like to merge EquistoreAutograd into RascalineAutograd, since the backward functions are identical.

Yes, that was my idea: refactor the code to have only one custom torch::autograd::Function. I'll finish up #200 first though, this is less urgent IMO.

Luthaf commented 1 year ago

This is now in #200!