Closed hoelzerC closed 6 months ago
Now works with getters as in ASE (documented in "Getting started"). With caching, the energy calculation does not trigger an additional calculation after the force calculation, as indicated by the counter of calculations run.
```python
import torch

import dxtb

dd = {"dtype": torch.double, "device": torch.device("cpu")}

numbers = torch.tensor([3, 1], device=dd["device"])
positions = torch.tensor([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]], **dd)
positions.requires_grad_(True)

# enable caching so repeated property requests reuse the singlepoint result
calc = dxtb.calculators.GFN1Calculator(numbers, opts={"cache_enabled": True}, **dd)

forces = calc.get_forces(positions)
print(calc._ncalcs)

# with caching enabled, this does not trigger an additional calculation
energy = calc.get_energy(positions)
print(calc._ncalcs)
```
Arguably, the main functionality of `dxtb` will be to calculate energies and forces. To better grasp the design choices behind the new interface (so far the docs are not online), I am wondering: what is the intended way to calculate energies and forces?
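For context, here is how I would naively expect forces to be obtained via autograd. This is a torch-only sketch with a toy quadratic potential standing in for the dxtb singlepoint energy; the potential and constants are purely illustrative, not dxtb API.

```python
import torch

# Toy potential U(x) = 0.5 * k * |x|^2, a hypothetical stand-in for a
# dxtb singlepoint energy; only the autograd pattern matters here.
k = 2.0
positions = torch.tensor([0.0, 0.0, 1.5], dtype=torch.double, requires_grad=True)

energy = 0.5 * k * positions.pow(2).sum()

# gradient of the energy with respect to the positions ...
(gradient,) = torch.autograd.grad(energy, positions)

# ... and the force carries the minus sign: F = -nabla(U)
forces = -gradient
print(energy.item(), forces)
```

The point being that the minus sign is part of the definition of the force, so a user should never have to add it manually.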
Is that how it should be done? Ideally, a new user will intuitively do this right. Imo, in the approach above, `calc.forces_analytical` should not require a `-1`, since per definition F = -nabla(U). Also, I am wondering about the content of the `results` object: the keys `gradient` and `total_grad`, for instance, are not relevant anymore, or are they? And which key unambiguously returns the (total) energy, as required by most users?

Suggestions:
- `calc.get_energy(...)`, which under the hood returns `calc.singlepoint(...).total.detach().sum()`
- `calc.get_force(...)`, which under the hood returns `-calc.forces_XXX(...)` (whereby `XXX` can be `analytical` per default)

To me, this feels like a more natural approach. Happy to discuss.