Closed bing-jian closed 4 years ago
Some solvers currently do not need both pieces of information at the same time, e.g. BFGS. But I agree that combining the computation would help some solvers, and the linesearch methods would benefit from shared computation. A first idea would be to extend the "Problem" interface by adding a method "value_gradient" that shares the computation. That way the solvers/linesearches can decide whether they need both pieces of information, and the default can simply call the "value" and "gradient" methods without breaking changes.
@bing-jian @PatWie any plan to implement the shared computation yet?
v2 now has an eval function to compute the entire state.
Hi Patrick,
First, thanks a lot for creating this library. I am switching from vnl to Eigen and looking for a nonlinear optimization library based on Eigen that can replace https://public.kitware.com/vxl/doc/release/core/vnl/html/classvnl__cost__function.html . The cppoptlib::Problem class seems able to fill this role. However, I have a question about migrating some functions from vnl_cost_function to cppoptlib::Problem. For many functions, evaluating the value and computing the derivatives actually share some common computation, so I am wondering if there is a way to share computation between value() and gradient(). For example, if it is always the case that value() is called before gradient(), or the other way around, we could save intermediate results in the function called first and reuse them in the other. This would avoid unnecessary computational cost, especially when value() and gradient() are both computationally heavy when run independently.
I hope the problem described above is clear and thanks for your attention!
-- Bing