emmt / OptimPackNextGen.jl

An almost pure Julia version of OptimPack for numerical optimization, with particular focus on large-scale problems.

Bound constraints vs. auxiliary function #8

Open · RainerHeintzmann opened this issue 3 years ago

RainerHeintzmann commented 3 years ago

I played around a bit to test the vmlmb method for DeconvOptim.jl. It worked nicely, but I did not really find any advantage over the standard LBFGS implementation provided by Optim.jl. Interestingly, we implemented the positivity constraint using a square operation as an auxiliary function rather than bounding the optimizer. Comparing the two approaches, standard LBFGS with the square auxiliary function converges much more quickly than vmlmb with `mem=20, lower=0`: e.g., after 16 iterations LBFGS already has a lower loss value than vmlmb with the bound has after 26 iterations. Yet if I run vmlmb with the same auxiliary square function, the performance is quite similar. Any idea how this could be improved, or is the auxiliary function always the better choice?
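For readers who want to try the two approaches side by side, here is a minimal sketch on a toy quadratic; the objective, starting point, and values are illustrative, not the DeconvOptim.jl setup. It assumes `vmlmb(fg!, x0; mem, lower)` from OptimPackNextGen.jl, where `fg!(x, g)` returns the objective value and overwrites `g` with the gradient, and Optim.jl's `optimize(f, g!, x0, LBFGS())` with `g!(storage, x)`:

```julia
using OptimPackNextGen  # provides vmlmb
using Optim             # provides optimize and LBFGS

c = [-1.0, 2.0, 3.0]    # toy target; c[1] < 0 makes the positivity bound active

# (a) Bound-constrained: minimize f(x) = 0.5*||x - c||^2 subject to x >= 0.
fg!(x, g) = (g .= x .- c; 0.5 * sum(abs2, x .- c))
x_bound = vmlmb(fg!, ones(3); mem = 20, lower = 0)

# (b) Auxiliary variables: substitute x = u.^2 and minimize over u,
#     using the chain rule dF/du = 2u .* (df/dx evaluated at x = u.^2).
F(u) = 0.5 * sum(abs2, u .^ 2 .- c)
G!(g, u) = (g .= 2 .* u .* (u .^ 2 .- c))
res = optimize(F, G!, ones(3), LBFGS())
x_aux = Optim.minimizer(res) .^ 2
```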

emmt commented 3 years ago

It is nice to have some feedback on VMLMB.

From what you write, I am assuming that, to ensure the positivity of the variables, say x, you optimize your objective function via auxiliary variables, say u, such that x[i] = u[i]^2 (for all indices i). Using auxiliary variables in this way is not always a good choice because it makes your problem non-convex regardless of the original objective function. At the bounds (where u[i] = 0), the gradient of the objective function with respect to the auxiliary variables is exactly zero, which may prevent moving away from the bounds. If you do not initialize your variables with zeros, this is numerically unlikely to occur, but having a non-convex and more non-linear objective function is generally not harmless for optimization. To me, it is thus rather fortunate that the convergence is improved by the change of variables; this is certainly problem dependent. I am not questioning your results, but it is funny that I wrote VMLM-B (a long time ago) precisely to avoid using this non-linear change of variables to impose the positivity of the variables...
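To make the zero-gradient point concrete, here is a small illustration (the helper name `grad_u` is hypothetical): by the chain rule for the substitution x = u.^2, ∂f/∂u[i] = 2 u[i] ∂f/∂x[i], so the factor 2 u[i] wipes out the gradient component wherever u[i] = 0.

```julia
# Chain rule for the substitution x = u.^2:  ∂f/∂u = 2u .* ∂f/∂x.
grad_u(u, grad_x) = 2 .* u .* grad_x

# Even with a strong pull (∂f/∂x = -3) on both components, the component
# sitting at the bound (u = 0) receives a zero gradient and cannot move away.
grad_u([0.0, 0.5], [-3.0, -3.0])  # → [0.0, -3.0]
```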

I have a few questions/remarks/suggestions:

RainerHeintzmann commented 3 years ago

Thanks a lot for your detailed explanation! Yes, your assumptions are all correct. In the square case, we do not really care which solution it finally finds, but of course you are right about the potential zero gradient in between. However, that stationary point is unstable, so in practice it does not matter as long as we do not initialize with zero. Looking at a range of different auxiliary functions (which I call "pre-forward models"), one also sees a range of different convergence rates. You are right that I should double-check that the gradients are correct, yet I observe faster (not slower) convergence compared to the version without auxiliary functions, which indicates that the gradient is probably OK. The problem is not top secret at all; it's all in DeconvOptim.jl. The number of variables is in the millions, and the objective function is, for example, the I-divergence of a linear forward problem (convolution of an unknown object with a known point spread function). I think proper convergence often needs hundreds of iterations, no matter whether it's L-BFGS or VMLMB. I have to double-check, but I think that L-BFGS-B is not in Optim.jl.
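For context, a hedged sketch of the kind of objective being described: the I-divergence (generalized Kullback–Leibler divergence) between data `d` and a blurred model `H*x`, where `H` is a circular convolution with a known PSF. The names (`blur`, `blur_adj`, `idiv_fg!`) and the FFT-based implementation are illustrative, not DeconvOptim.jl's actual code:

```julia
using FFTW

# Circular convolution with the PSF (forward model H) and its adjoint Hᵀ,
# via the precomputed transfer function k = fft(psf).
blur(x, k)     = real.(ifft(fft(x) .* k))
blur_adj(y, k) = real.(ifft(fft(y) .* conj.(k)))

# I-divergence D(d ‖ Hx) = Σ [ Hx - d + d*log(d / Hx) ]; its gradient with
# respect to x is Hᵀ(1 - d ./ Hx).  Shaped to serve as the fg! callback of
# vmlmb, e.g. vmlmb((x, g) -> idiv_fg!(x, g, d, k), x0; mem = 20, lower = 0).
function idiv_fg!(x, g, d, k)
    m = blur(x, k)                            # model image Hx (assumes m .> 0 and d .> 0)
    g .= blur_adj(1 .- d ./ m, k)             # gradient Hᵀ(1 - d ./ m)
    return sum(m .- d .+ d .* log.(d ./ m))   # objective value
end
```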