abelsiqueira closed this pull request 6 years ago.
This fails on 0.4 because I used the new `sum` syntax. Maybe we could drop 0.4 now. NLPModels already dropped it, and 0.6 is scheduled for the end of February.
@dpo, I've changed this to use `status = :first_order` on the solver. `ExecutionStats` has `status` as a required argument, which must be one of `:unknown`, `:first_order`, `:max_eval`, `:max_time`, `:neg_pred`, or `:exception`. Printing the stats, or using `getStatus(stats)`, shows a useful message instead of the key.
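A minimal sketch of the interface described above (the type, helper name, and messages below are illustrative, not necessarily the exact Optimize.jl code):

```julia
# Illustrative sketch only: a stats type whose required `status` field
# is restricted to the symbols listed above.
const STATUSES = [:unknown, :first_order, :max_eval, :max_time, :neg_pred, :exception]

struct ExecutionStats
  status::Symbol
  function ExecutionStats(status::Symbol)
    status in STATUSES || error("unknown status $status")
    new(status)
  end
end

# Hypothetical helper mapping the status key to a readable message.
getStatus(stats::ExecutionStats) =
  Dict(:unknown     => "unknown",
       :first_order => "first-order stationary",
       :max_eval    => "maximum number of evaluations exceeded",
       :max_time    => "maximum time exceeded",
       :neg_pred    => "negative predicted reduction",
       :exception   => "unhandled exception")[stats.status]

stats = ExecutionStats(:first_order)
println(getStatus(stats))  # "first-order stationary"
```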
This should be ready for review again.
I persist in requesting max_iter. I know the arguments against it, but why not offer the user the possibility, defaulting to infinitely many iterations? For instance, a QP solver could well use internally the c and Q defining the objective, and iterate until some precision is reached. No evaluations occur there. A max_iter would trap failures caused by roundoff preventing us from reaching the prescribed accuracy.
I agree with @vepiteski, defaulting `max_iter` to infinity should not cause any harm to efficiency.
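A quick sketch of why the default costs nothing (function and keyword names here are illustrative, not from Optimize.jl):

```julia
# Sketch: defaulting max_iter to an effectively infinite bound adds only
# the loop-guard comparison; solvers that converge never notice it, while
# a roundoff-stalled run is still trapped by an explicit cap.
function toy_solve(step; max_iter::Int = typemax(Int), atol::Float64 = 1e-8)
  iter  = 0
  resid = 1.0
  while resid > atol && iter < max_iter
    resid = step(resid)   # user-supplied contraction step
    iter += 1
  end
  return resid, iter
end

resid, iters = toy_solve(r -> r / 2)                    # runs to tolerance
_, iters_capped = toy_solve(r -> r / 2, max_iter = 3)   # trapped by the cap
```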
I think I'm gonna refactor this now.
I have a hard time thinking of a situation where there wouldn't be a more meaningful measure than the number of iterations. In the QP example above, the number of products with Q, or number of factorizations of (a submatrix of) Q would be far more meaningful. Different algorithms perform different work during one iteration, making comparisons difficult, or worse, misleading.
We could add something related, then, with this case in mind. Since `ExecutionStats` should be the only output of our algorithms, if a dev/user wants to create a quadratic solver measuring matrix products, then there should be a field in `ExecutionStats` for that (or something related). A similar, but different, option would be creating a new `NLPModel` for quadratic problems and adding a new field there, though that loses generality.
The matvecs are already in the counters (those should be part of `ExecutionStats`). Each solver could add its own stats in a solver-dependent `Dict`, for things like the number of factorizations. The number of iterations could be in there too, but I don't think it should be relied upon.
Will add solver-specific dict.
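The solver-dependent `Dict` proposed above could be sketched like this (field and key names are illustrative only):

```julia
# Sketch of a solver-dependent statistics Dict: each solver records
# whatever measure of work is meaningful to it, without forcing a
# common notion of "iteration" on every algorithm.
struct Stats
  status::Symbol
  solver_specific::Dict{Symbol,Any}
end

# A hypothetical QP solver might report matrix-vector products and
# factorizations; iterations can live here too without being mandatory.
stats = Stats(:first_order,
              Dict(:matvecs => 57, :factorizations => 3, :iterations => 12))

stats.solver_specific[:matvecs]  # 57
```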
There's an error on OSX linked to GSL. Perhaps you need to rebase this branch?
I think this needs a new CUTEst release, actually. Can I create CUTEst 0.2.0?
Sure.
I'll rebase this after #38.
I am in the process of testing line search algorithms. I compare the results of descent algorithms (DA) (conjugate gradient, L-BFGS, Newton) when using different linesearch routines (LS). Therefore, I need the solver to output more finely grained evaluation counts: f is the objective and h(t) = f(x + t*d), and I currently output the counts f_obj, f_grad, f_Hess, f_HV and h_obj, h_grad, h_hess. This lets me tell whether a specific linesearch is better because it computes the stepsize more cheaply (fewer h evaluations) or because the "quality" of the stepsize is better, resulting in fewer f evaluations in the DA.
Should `LineFunctions` have counters associated with them, then?
On 2017-06-13 at 10:03, Dominique wrote:

> Should `LineFunctions` have counters associated with them, then?
That's what we have implemented. For now, we use a local copy of LineFunctions with the counters.
JPD
You could create a subproblem `NLPModel` and use its counters.
LSnlp = SimpleNLPModel(t->obj(nlp, x + t*d), zeros(2), g=...)
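Setting the exact `SimpleNLPModel` signature aside, the idea can be sketched with a small counting wrapper (types and names below are illustrative, not the actual NLPModels API):

```julia
# Sketch of the subproblem idea: wrap h(t) = f(x + t*d) in its own tiny
# model with its own counters, so line-search evaluations are tallied
# separately from the outer solver's.
mutable struct LSModel
  h::Function     # h(t) = f(x + t*d)
  neval_obj::Int  # evaluations of h, counted independently of f's counters
end

obj(m::LSModel, t) = (m.neval_obj += 1; m.h(t))

f(x) = sum(abs2, x)            # outer objective
x = [1.0, 2.0]
d = -[1.0, 2.0]                # descent direction toward the minimizer

ls = LSModel(t -> f(x + t * d), 0)
obj(ls, 0.5)                   # h(0.5) = 1.25
obj(ls, 1.0)                   # h(1.0) = 0.0
ls.neval_obj                   # 2: line-search work counted on its own
```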
Maybe change `LineFunction` to be an `NLPModel`?
Maybe this is a better solution than `LineFunctions`. It would also allow bi- or multi-parameter searches in the vein of x + t*d1 + s*d2.
From what I can see, the difference is pretty much handling `Float64`s instead of `Array`s.
Consider a descent algorithm DA which uses a line search LS. How costly is it to instantiate a new `LineFunction` or `SimpleNLPModel` (call either an LSModel) at each iteration requiring an LS computation? Would it not be better to instantiate the LSModel once, and then, at the appropriate place (at each iteration of the DA), have a function `set!(LSModel, x, d)` change the current x and d?
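The reuse pattern suggested here could be sketched as follows (type and function names are illustrative, not an existing API):

```julia
# Sketch: build the LSModel once, then update x and d in place at each
# outer iteration, avoiding per-iteration allocation of a new model.
mutable struct ReusableLSModel
  x::Vector{Float64}
  d::Vector{Float64}
  f::Function       # outer objective
end

function set!(m::ReusableLSModel, x, d)
  copyto!(m.x, x)   # overwrite in place; no new arrays allocated
  copyto!(m.d, d)
  return m
end

obj(m::ReusableLSModel, t::Real) = m.f(m.x .+ t .* m.d)

f(x) = sum(abs2, x)
m = ReusableLSModel(zeros(2), zeros(2), f)
set!(m, [1.0, 2.0], [-1.0, -2.0])   # new iterate and direction, same model
obj(m, 1.0)                         # h(1.0) = f([0, 0]) = 0.0
```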
Do you mean to close this pull request?
Maybe I'm off topic for this pull request. My concern is cleanly allowing line searches to report evaluation counts separately from the main algorithm, which brings me to question the whole set of line search tools.
@abelsiqueira I forget where this stands.
This should be ready to merge after rebase.
Coverage decreased (-3.2%) to 65.395% when pulling 5eade8902e16d6154964e731c72a0036c35d4b5c on abelsiqueira:feat/stats into 30dac885fc5f7096d45011758c3d52dbe2e7df68 on JuliaSmoothOptimizers:master.