The error occurs in functions/LBFGS.py, line 854.
I think this error comes from `t`: it becomes a double-precision number as the optimization converges, so the dtypes in the comparison no longer match.
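Here is a minimal sketch of what I think is going on (the variable names are my own guesses, not the ones in LBFGS.py):

```python
import torch

# Sketch of the suspected promotion: once t drifts to float64, the
# right-hand side of the check at line 854 is promoted to double while
# F_new stays float32.
F_new = torch.tensor(0.5)                   # float32 loss at the trial point
F_k = torch.tensor(0.6)                     # float32 loss at the current iterate
gtd = torch.tensor(-1.0)                    # float32 directional derivative
c1 = 1e-4
t = torch.tensor(0.1, dtype=torch.float64)  # step size that became double

rhs = F_k + c1 * t * gtd
print(rhs.dtype)  # torch.float64 -- no longer matches F_new's float32
# Depending on the PyTorch version, comparing F_new against rhs either
# raises a type-mismatch RuntimeError or silently promotes to float64.
```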
I tried

```python
if F_new > F_k + (c1 * t * gtd).float():
```

or

```python
if F_new.double() > F_k.double() + (c1 * t * gtd):
```
However, with either change the loss stops decreasing.
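One alternative I would try is to avoid casting the loss values at all and instead convert `t` back to a Python float before it enters the product. This is an untested sketch with a hypothetical helper, not the code from LBFGS.py:

```python
import torch

# Hypothetical helper, not from LBFGS.py: it mirrors the failing check but
# converts t to a Python float first, so c1 * t * gtd keeps gtd's dtype
# instead of being promoted to float64.
def sufficient_decrease_failed(F_new, F_k, gtd, t, c1):
    if torch.is_tensor(t):
        t = t.item()  # .item() returns a plain Python number
    # Python scalars do not promote float32 tensors to float64,
    # so both sides of the comparison share a dtype again.
    return F_new > F_k + c1 * t * gtd
```

Since this keeps the comparison in `F_new`'s original dtype on both sides, I suspect it would not disturb the line search the way my two casting attempts did.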