dftlibs / xcfun

XCFun: A library of exchange-correlation functionals with arbitrary-order derivatives
https://dftlibs.org/xcfun/
Mozilla Public License 2.0

SCAN functionals return NaN values for small density gradients #144

Closed. fishjojo closed this issue 3 years ago.

fishjojo commented 3 years ago

The following code either returns NaN for the functional derivatives or crashes with the assertion error: void pow_expand(T*, T, T) [with T = double; int N = 2]: Assertion `x0 > 0 && "pow(x,a) not real analytic at x <= 0"' failed.

/* Reproducer: evaluate functional id 46 (SCAN in this build) at a point with a
 * very small density gradient, using the XCFun C API. */
#include <stdio.h>
#include <stdlib.h>
#include <XCFun/xcfun.h>  /* or "xcfun.h", depending on the install layout */

int main(void)
{
  /* Input layout for XC_N_GNN_TAUN: density n, squared gradient |grad n|^2, tau. */
  double* rho = malloc(sizeof(double) * 3);
  rho[0] = 1.56340410e-09;
  rho[1] = 5.24324068e-18;
  rho[2] = 4.19259161e-10;

  /* Look up the parameter name for index 46 and switch it on with weight 1.0. */
  int fn_id = 46;
  xcfun_t* fun = xcfun_new();
  const char* name = xcfun_enumerate_parameters(fn_id);
  xcfun_set(fun, name, 1.0);

  /* First-order partial derivatives in the (n, gnn, taun) variable set. */
  xcfun_eval_setup(fun, XC_N_GNN_TAUN, XC_PARTIAL_DERIVATIVES, 1);
  int outlen = xcfun_output_length(fun);
  double* output = malloc(sizeof(double) * outlen);

  xcfun_eval(fun, rho, output);
  xcfun_delete(fun);

  for (int i = 0; i < outlen; i++) {
    printf("%f\n", output[i]);
  }

  free(rho);
  free(output);
  return 0;
}

I think this is caused by the following lines of code https://github.com/dftlibs/xcfun/blob/65c1ea46a2258d01ef8c680bc2d306a93ab56da7/src/functionals/SCAN_like_eps.hpp#L118-L124 where s is set to 0 whenever d_g is smaller than 1e-16.

bast commented 3 years ago

Thanks for reporting this and for locating the problem!

robertodr commented 3 years ago

@JFurness1 @bast do you have an idea how to fix this directly in the XCFun source code?

JFurness1 commented 3 years ago

I'll take a look at this now. I've probably made a poor choice for regularising the gradient somewhere along the way.

JFurness1 commented 3 years ago

You're correct that setting s to 0 is causing the problem, specifically on the line

  num gx = 1.0 - exp(-A1 / pow(p, 1.0 / 4.0));

where p = s*s and the problem is obvious.
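
To spell out why this bites only in the derivatives: the value of gx has a perfectly finite limit (gx -> 1 as p -> 0+), but pow(p, 1/4) is not real analytic at p = 0, so the derivative arithmetic hits an infinite chain-rule factor there. A minimal plain-C sketch of what happens at p = 0 (not XCFun code; a1 = 4.9479 is the published SCAN exchange parameter):

#include <stdio.h>
#include <math.h>

int main(void)
{
  const double A1 = 4.9479;  /* SCAN exchange parameter a1 */
  double p = 0.0;            /* reduced gradient squared, p = s*s */

  /* The value itself is fine: -A1/0 = -inf, exp(-inf) = 0, so gx = 1. */
  double gx = 1.0 - exp(-A1 / pow(p, 0.25));

  /* The analytic derivative dgx/dp = -(A1/4) * p^(-5/4) * exp(-A1 * p^(-1/4))
   * evaluates to 0 * inf = NaN at p = 0, which is the non-analytic point the
   * assertion in pow_expand guards against. */
  double dgx = -(A1 / 4.0) * pow(p, -1.25) * exp(-A1 * pow(p, -0.25));

  printf("gx = %g, dgx/dp = %g\n", gx, dgx);  /* prints gx = 1, dgx/dp = nan */
  return 0;
}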

I never saw this because Turbomole regularises the gradient to a minimum value (1e-16) before calling XCFun, so this bug never occurred.

I've matched this regularisation in my local branch by calculating the reduced gradient using 1e-16 whenever g < 1e-16, rather than setting s to zero. This prevents the problem and brings the behaviour into line with the Turbomole implementation. One could do the same in the correlation, but there is no comparable singularity with respect to s there, so it isn't needed.
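
For concreteness, a minimal plain-C sketch of the regularisation described above (illustrative only, not the committed patch; 1e-16 is the Turbomole-style floor mentioned):

#include <math.h>

/* Reduced density gradient s = |grad n| / (2 kF n), kF = (3 pi^2 n)^(1/3),
 * with the squared gradient floored at 1e-16 so s never becomes exactly 0. */
double reduced_gradient(double n, double gnn /* |grad n|^2 */)
{
  const double pi = 3.14159265358979323846;
  const double g_min = 1e-16;                /* assumed regularisation threshold */
  double g = (gnn < g_min) ? g_min : gnn;
  double kf = cbrt(3.0 * pi * pi * n);
  return sqrt(g) / (2.0 * kf * n);
}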

I also cleaned it up a bit by calculating p directly, rather than calculating s and then immediately squaring it; this avoids taking the square root of the gradient. I have a very faint memory of there being a reason we calculated s and then squared it back when I originally implemented SCAN in 2015, but I don't remember whether it was a good one.
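
And a sketch of that cleanup: since the contracted gradient gnn = |grad n|^2 is already the input, p = s^2 can be formed directly, without ever taking a square root of the (possibly tiny) gradient (again illustrative, not the actual patch):

#include <math.h>

/* p = s^2 = |grad n|^2 / (4 kF^2 n^2), computed from the squared gradient
 * directly, so no sqrt of the gradient is needed. */
double reduced_gradient_squared(double n, double gnn)
{
  const double pi = 3.14159265358979323846;
  const double g_min = 1e-16;                /* same assumed floor as above */
  double g = (gnn < g_min) ? g_min : gnn;
  double kf = cbrt(3.0 * pi * pi * n);
  return g / (4.0 * kf * kf * n * n);
}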

@robertodr and @bast, how would you like me to commit this? Should I open another pull request, or am I OK to push the changes to master directly?

robertodr commented 3 years ago

That was amazingly quick, James! Please open another pull request with the patch. Thanks again!

JFurness1 commented 3 years ago

No problem, it was a pretty glaring error that I really should have seen before! I'm surprised it got this far without showing up.

I've opened the pull request. Hopefully I got that all correct; I'm still finding my feet with git's wider fork/pull-request features.

robertodr commented 3 years ago

@fishjojo would you be able to check whether the latest master fixes the problems you observed in PySCF? Thank you.

fishjojo commented 3 years ago

Yes, this fixes the problem. Thanks.

JFurness1 commented 3 years ago

Great! I'm glad it was an easy fix!

bast commented 3 years ago

Thanks so much James and everybody! Sorry for answering late. Meetings all the time :-)

robertodr commented 3 years ago

Excellent! I'll prepare a new release in the morning, after updating the changelog.
