I noticed that the default convergence check in FirstOrderMinimizer does not take the absolute value of the function value when checking the relative gradient norm. I think that when minimizing functions that take negative values at the current point, the relative gradient tolerance always falls back to 1E-8.
I fixed the check to use the absolute value of the objective function value. Could you please check whether this could be merged?
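For context, here is a minimal standalone sketch of the issue as I understand it; the constants, function names, and exact comparison are my own simplification for illustration, not the actual FirstOrderMinimizer code:

```scala
object ConvergenceCheckSketch {
  // Hypothetical defaults mirroring the behavior described above.
  val tolerance = 1e-5
  val minGradientTolerance = 1e-8

  // Old variant: with a negative function value, tolerance * value is negative,
  // so max(...) always falls back to the absolute floor of 1e-8.
  def gradientConvergedOld(value: Double, gradientNorm: Double): Boolean =
    gradientNorm <= math.max(tolerance * value, minGradientTolerance)

  // Fixed variant: take the absolute value of the objective first,
  // so the relative tolerance is meaningful for negative values too.
  def gradientConvergedNew(value: Double, gradientNorm: Double): Boolean =
    gradientNorm <= math.max(tolerance * math.abs(value), minGradientTolerance)

  def main(args: Array[String]): Unit = {
    // With value = -1000 and gradientNorm = 1e-3, the relative tolerance
    // should be 1e-2, so only the fixed check reports convergence.
    println(gradientConvergedOld(-1000.0, 1e-3)) // false
    println(gradientConvergedNew(-1000.0, 1e-3)) // true
  }
}
```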
EDIT: Seems the CI pipeline doesn't pass, but the tests passed locally for me.