cossio closed this issue 4 years ago
Funny you should open this, as another user has also just made me aware of it. I'll have to think about it, but it'll be fixed.
Just to clarify, would this allow for a `TwiceDifferentiable(f, g!, fg! = fg!)`-like syntax, to allow autodiff for the Hessian while providing an efficient `fg!` method?
I think that should be the job of the constructor of a TwiceDifferentiable that takes in a OnceDifferentiable. I have a new version of Optim in the works and this part will be changed quite a bit, so I'm sort of only patching here.
Got it. Looking forward to the new design. 👍
You can be one of the lucky beta testers... I'll provide you with many internet points! (though they'll be Myspace ad credits...)
this seems like a problem in NLSolversBase
Sorry everyone! It kept slipping my mind. Should work now.
Thanks so much! I'll give it a whirl soon.
I can use `Optim.only_fg!` with a gradient-free method (see the sketch below). This can be convenient, in the sense that you only have to write `fg!` and your code then works for all gradient-free and gradient-required algorithms, which can be useful e.g. for benchmarking purposes. As stated in the docs, the gradient-free algorithm can call `fg!` with `G === nothing` to request the function value only and not the gradient.
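Here is a minimal sketch of the working case (the Rosenbrock objective and starting point are just placeholders):

```julia
using Optim

# fg! may be called with G === nothing by gradient-free methods,
# in which case only the function value is needed.
function fg!(F, G, x)
    if G !== nothing
        G[1] = -2.0 * (1.0 - x[1]) - 400.0 * x[1] * (x[2] - x[1]^2)
        G[2] = 200.0 * (x[2] - x[1]^2)
    end
    if F !== nothing
        return (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
    end
    return nothing
end

x0 = zeros(2)

# NelderMead is gradient-free: it calls fg! with G === nothing.
optimize(Optim.only_fg!(fg!), x0, NelderMead())

# LBFGS uses both the value and the gradient from the same fg!.
optimize(Optim.only_fg!(fg!), x0, LBFGS())
```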
However, this behavior is broken for higher derivatives. `NelderMead()` needs neither the gradient nor the Hessian, yet `only_fgh!` fails (see the sketch below). Instead I'd expect `NelderMead` to call `fgh!` with both `G === nothing` and `H === nothing`, to request the function value only and not the gradient nor the Hessian.
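A sketch of the failing case, continuing from the snippet above (the exact error output is omitted):

```julia
# Same pattern extended with a Hessian. fgh! may be called with G === nothing
# and/or H === nothing, so a gradient-free method should only need the value.
function fgh!(F, G, H, x)
    if G !== nothing
        G[1] = -2.0 * (1.0 - x[1]) - 400.0 * x[1] * (x[2] - x[1]^2)
        G[2] = 200.0 * (x[2] - x[1]^2)
    end
    if H !== nothing
        H[1, 1] = 2.0 - 400.0 * x[2] + 1200.0 * x[1]^2
        H[1, 2] = -400.0 * x[1]
        H[2, 1] = -400.0 * x[1]
        H[2, 2] = 200.0
    end
    if F !== nothing
        return (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
    end
    return nothing
end

# I'd expect this to call fgh! with G === nothing and H === nothing,
# but it currently errors:
optimize(Optim.only_fgh!(fgh!), x0, NelderMead())
```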
Similarly, `LBFGS()` needs the gradient but not the Hessian, and again `only_fgh!` fails (sketched below). In this case I'd expect `LBFGS` to call `fgh!` with `H === nothing`, to request the function value and the gradient but not the Hessian.
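A sketch of the gradient-only case, reusing the `fgh!` and `x0` defined above:

```julia
# I'd expect fgh! to be called with H === nothing here (value and gradient,
# no Hessian), but this currently errors as well:
optimize(Optim.only_fgh!(fgh!), x0, LBFGS())
```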