grimme-lab / dxtb

Efficient And Fully Differentiable Extended Tight-Binding
https://dxtb.readthedocs.io
Apache License 2.0

Overlap autograd requires high tolerances #109

Closed · hoelzerC closed this issue 1 year ago

hoelzerC commented 1 year ago

Gradcheck for the MMD (McMurchie-Davidson) routine itself seems to work fine; however, the general overlap setup only passes with large tolerances [1]. Furthermore, the PyTorch (analytical) Jacobian and the numerical Jacobian do not match [2]. A minimal sketch of the kind of check involved is given at the end of this comment.

See required tolerances in:

  1. https://github.com/grimme-lab/xtbML/blob/27f23fe6c6c2c0f7d51e29bf3fa5e563482b5431/test/test_overlap/test_grad.py#L220
  2. https://github.com/grimme-lab/xtbML/blob/27f23fe6c6c2c0f7d51e29bf3fa5e563482b5431/test/test_overlap/test_grad.py#L240

One possible approach is to test the OS (Obara-Saika) overlap scheme for better autograd agreement.
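For context, here is a minimal sketch of the two checks referenced above, using a toy s-orbital Gaussian overlap as a hypothetical stand-in for the dxtb overlap build (the toy function itself is smooth and passes with default tolerances; only the testing pattern mirrors the failing tests):

```python
import math
import torch
from torch.autograd import gradcheck
from torch.autograd.functional import jacobian

def s_overlap(r: torch.Tensor, alpha: float = 1.0, beta: float = 1.0) -> torch.Tensor:
    # Toy overlap of two s-type Gaussians separated by r
    # (illustration only, not the dxtb implementation):
    # S(r) = (pi / (alpha + beta))^(3/2) * exp(-alpha*beta/(alpha+beta) * |r|^2)
    p = alpha + beta
    return (math.pi / p) ** 1.5 * torch.exp(-alpha * beta / p * (r * r).sum())

r = torch.randn(3, dtype=torch.double, requires_grad=True)

# [1] gradcheck compares autograd gradients against finite differences;
# the issue is that the full overlap setup only passes with loose atol/rtol.
print(gradcheck(s_overlap, (r,), atol=1e-5, rtol=1e-3))

# [2] the disagreement can also be inspected directly by comparing the
# autograd Jacobian with a central finite-difference estimate.
jac_auto = jacobian(s_overlap, r)
eps = 1e-6
jac_num = torch.stack([
    (s_overlap(r + eps * e) - s_overlap(r - eps * e)) / (2 * eps)
    for e in torch.eye(3, dtype=torch.double)
])
print(torch.allclose(jac_auto, jac_num, atol=1e-6))
```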

marvinfriede commented 1 year ago

This is actually a bug: torch.autograd.gradcheck also fails for the McMurchie-Davidson algorithm for p-orbitals and higher. However, I am still not sure why this is happening.

Edit: The culprit was a torch.abs that messes with the gradients because its derivative is undefined at 0.
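For the record, a minimal sketch of why a torch.abs near 0 breaks gradcheck, together with one common smoothing workaround (shown for illustration; not necessarily the fix applied in dxtb):

```python
import torch
from torch.autograd import gradcheck

def kinked(x: torch.Tensor) -> torch.Tensor:
    # torch.abs is not differentiable at 0; for inputs close to 0 the
    # finite-difference step straddles the kink, so the numerical and
    # analytical Jacobians disagree and gradcheck fails.
    return torch.abs(x).sum()

x = torch.tensor([1e-7], dtype=torch.double, requires_grad=True)
print(gradcheck(kinked, (x,), eps=1e-6, raise_exception=False))  # False

def smoothed(x: torch.Tensor, tol: float = 1e-6) -> torch.Tensor:
    # One common workaround (an assumption here, not necessarily the fix
    # used in dxtb): replace |x| with sqrt(x^2 + tol), which is smooth
    # everywhere at the cost of a small bias near 0.
    return torch.sqrt(x * x + tol).sum()

print(gradcheck(smoothed, (x,), eps=1e-6, raise_exception=False))  # True
```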