Closed: cseveren closed this issue 2 months ago
Because this may not be an Optim issue, I also posted here: https://discourse.julialang.org/t/gradient-norm-does-not-change-in-optim-using-autodiff/94215
No progress would be my bet. If you can provide more information I can reopen :)
What is signified by the output `Gradient norm` being constant (stuck, not changing) across iterations when using `NewtonTrustRegion`? As an example, below is the output from the first two iterations of a minimization problem run using `NewtonTrustRegion`. My initial point is the output of a prior round of optimization, which also got stuck at this same `Function value` and `Gradient norm`.

Some more detail: if I switch to `LBFGS`, the optimization successfully continues (`Function value` decreases), but of course gradient methods are slow, so it would be ideal to switch back to `NewtonTrustRegion`. Even if I let `LBFGS` run for a while so as to find a moderately different candidate minimizer, this same behavior of being stuck at a constant `Gradient norm` re-emerges when I switch back to `NewtonTrustRegion`. The switching pattern is sketched below.
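For concreteness, the pattern looks roughly like this. A toy objective stands in for my real model, which I can't share; `rosenbrock`, the starting point, and the option values here are just placeholders:

```julia
using Optim

# Stand-in objective (my real objective is complicated and data-heavy).
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

x0 = [-1.2, 1.0]

# show_trace prints the per-iteration Function value and Gradient norm,
# which is where I see the Gradient norm freeze.
opts = Optim.Options(show_trace = true, iterations = 1_000)

# Round 1: NewtonTrustRegion with forward-mode autodiff -- this is the
# stage that stalls for my models (not for this toy problem).
res1 = optimize(rosenbrock, x0, NewtonTrustRegion(), opts; autodiff = :forward)

# Round 2: warm-start LBFGS from the stuck point; Function value decreases again.
res2 = optimize(rosenbrock, Optim.minimizer(res1), LBFGS(), opts; autodiff = :forward)

# Round 3: hand the improved candidate back to NewtonTrustRegion --
# for my models, the constant-Gradient-norm behavior re-emerges here.
res3 = optimize(rosenbrock, Optim.minimizer(res2), NewtonTrustRegion(), opts; autodiff = :forward)
```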
re-emerges.I would provide code but it is complicated and has a lot of data and it only occurs in some of the models I've run -- I'm really just hoping to get some intuition as to options/tuning parameters to adjust to bounce out of difficult spots. I have already tried
allow_f_increases=true
; that did not solve the issue.Excellent package, many thanks.
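P.S. For reference, the tuning knobs I'm aware of. These are the `NewtonTrustRegion` keyword arguments and defaults as I read them in the docs, so worth double-checking against the installed version; `my_objective` and `x_current` are placeholders:

```julia
using Optim

# Options I've tried so far; allow_f_increases alone did not unstick things.
opts = Optim.Options(allow_f_increases = true, show_trace = true, g_tol = 1e-8)

# Trust-region tuning knobs with their documented defaults. A smaller
# initial_delta or a different eta changes how steps are accepted and
# how fast the trust region grows or shrinks.
method = NewtonTrustRegion(initial_delta = 1.0,  # initial trust-region radius
                           delta_hat = 100.0,    # largest allowed radius
                           eta = 0.1,            # accept a step when rho > eta
                           rho_lower = 0.25,     # shrink the radius when rho < rho_lower
                           rho_upper = 0.75)     # grow the radius when rho > rho_upper

# res = optimize(my_objective, x_current, method, opts; autodiff = :forward)
```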