See the three examples from Nesterov's "Introductory Lectures on Convex Optimization", page 41:
SR1 rank-one correction
DFP
BFGS
These require storing a matrix of size DxD in memory, but they can be good baselines to compare against. We may implement a generic "variable metric" solver templatized by the update method (just like for CGD); a sketch follows.
Bonus:
implement the test functions from page 56 (geometric optimization and lp-norm approximation); standard forms are sketched after this list.
double-check the implementation of the test functions that overflow (e.g. Zakharov): optimization fails for them when using a large number of dimensions (e.g. 1K+). Check whether a numerically more robust implementation is possible; one option is sketched after this list.
double-check the implementation of the CGD methods, using the pseudo-code from page 45.
improve the speed of the test functions (their evaluation time should scale linearly with the number of dimensions).
emit more detailed line-search errors. Also investigate why the line-search methods fail so often for high-dimensional test functions.