JuliaSmoothOptimizers / SolverTools.jl

Tools for developing nonlinear optimization solvers.

trunk variant for NLS problems #55

Closed dpo closed 5 years ago

dpo commented 6 years ago

@abelsiqueira There's quite a bit of repetition with trunk here. Do you see a more elegant way that would avoid redundancy?

abelsiqueira commented 6 years ago

I haven't looked at it in detail, but it seems the main difference is the step that uses the Gauss-Newton approximation of the Hessian, solved with least-squares CG (`cgls`). The objective function and gradient could be computed at the same time using `objgrad!`, which would also avoid unnecessary computations. So maybe an `if isa(nlp, AbstractNLSModel)` branch could work for that step?
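That branch could look something like the following self-contained sketch. The model types are local stand-ins for the real NLPModels hierarchy, and `step_kind` is a hypothetical helper for illustration, not part of SolverTools:

```julia
# Stand-ins for the NLPModels type hierarchy (illustrative only).
abstract type AbstractNLPModel end
abstract type AbstractNLSModel <: AbstractNLPModel end

struct GenericModel <: AbstractNLPModel end
struct LeastSquaresModel <: AbstractNLSModel end

# Inside the trunk step, branch on the model type: NLS models get the
# Gauss-Newton model solved with least-squares CG (cgls); everything
# else keeps the usual Newton model solved with cg.
function step_kind(nlp::AbstractNLPModel)
    if isa(nlp, AbstractNLSModel)
        return :gauss_newton  # J'J model, solve with cgls
    else
        return :newton        # full Hessian model, solve with cg
    end
end
```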

dpo commented 6 years ago

Would that be type stable? By the way, it would be great to do the same with TRON.

dpo commented 6 years ago

This requires a new release of NLPModels.

abelsiqueira commented 6 years ago

Not sure about the type stability. About the TRON LS version, I'll try to take a look soon.

dpo commented 6 years ago

We need to think this through because, ideally, we should be able to pass in a solver for the subproblem. I'd like to know of good ways to do it. In `trunkls`, I'm using `cgls`, but we could pass other solvers that have the right properties. In `trunk`, `tron`, `tronls`, and other solvers, the same situation arises.

dpo commented 6 years ago

Something like https://github.com/mauro3/SimpleTraits.jl perhaps.

abelsiqueira commented 6 years ago

Revisiting my answer, I think `objgrad` alone won't do. Probably only memoization will prevent full code redundancy. On the traits: I haven't quite grasped the concept yet. Is it to establish which subproblem solvers work with each type of problem? Like `cgls` for NLS, etc.?

abelsiqueira commented 6 years ago

It seems memoization is not so difficult to set up, except for the in-place methods. Related: https://github.com/simonster/Memoize.jl/issues/23
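For the out-of-place case, a minimal hand-rolled cache (rather than Memoize.jl, which the linked issue shows has trouble with in-place methods) could look like this sketch; `CachedResidual` and `residual!` are hypothetical names, not part of the package:

```julia
# Cache the last residual by input vector, so that obj and grad can
# share a single residual evaluation at the same point x.
mutable struct CachedResidual
    f::Function                               # x -> residual vector
    last_x::Union{Nothing,Vector{Float64}}    # last point evaluated
    last_r::Vector{Float64}                   # cached residual at last_x
end
CachedResidual(f) = CachedResidual(f, nothing, Float64[])

function residual!(c::CachedResidual, x::Vector{Float64})
    if c.last_x === nothing || c.last_x != x
        c.last_r = c.f(x)        # cache miss: evaluate and store
        c.last_x = copy(x)
    end
    return c.last_r              # cache hit: reuse previous evaluation
end

calls = Ref(0)
c = CachedResidual(x -> (calls[] += 1; x .- 1.0))
x = [2.0, 3.0]
residual!(c, x)
residual!(c, x)
calls[]  # → 1 (the second call hits the cache)
```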

dpo commented 6 years ago

Where do you think we're performing redundant evaluations here? I was talking about redundant code structure.

Traits could help pass a solver for the trust-region subproblem as argument. For example, we could state that certain solvers are able to solve TR subproblems using something like

```julia
@traitdef CanSolveTRSubproblem{LinearSolver} <- can_solve_tr_subproblem(LinearSolver)
```

If `cg` and `cgls` have a type that derives from `LinearSolver`, we could say

```julia
can_solve_tr_subproblem(CGType) = true
can_solve_tr_subproblem(CGLSType) = true
can_solve_tr_subproblem(GMRESType) = false
```

Then, there is a mechanism to write the trunk function to state that the subproblem solver argument must have that trait. Something like

```julia
@traitfn function trunk(model::AbstractNLPModel, subsolver::LS) where {LS; CanSolveTRSubproblem{LS}}
  ...
end
```

At least, that's my current understanding of traits.
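The same dispatch idea can also be written without SimpleTraits, using the plain "Holy traits" pattern. All type names below are illustrative stand-ins, and this `trunk` is a toy dispatcher, not the real solver:

```julia
# Solver type hierarchy (stand-ins for real Krylov solver types).
abstract type LinearSolver end
struct CGType    <: LinearSolver end
struct CGLSType  <: LinearSolver end
struct GMRESType <: LinearSolver end

# Trait types: whether a solver can handle TR subproblems.
struct CanSolveTRSubproblem end
struct CannotSolveTRSubproblem end

tr_trait(::Type{CGType})    = CanSolveTRSubproblem()
tr_trait(::Type{CGLSType})  = CanSolveTRSubproblem()
tr_trait(::Type{GMRESType}) = CannotSolveTRSubproblem()

# Entry point: re-dispatch on the trait of the subsolver's type.
trunk(model, subsolver::T) where {T<:LinearSolver} = trunk(tr_trait(T), model, subsolver)
trunk(::CanSolveTRSubproblem, model, subsolver) = "solving with $(typeof(subsolver))"
trunk(::CannotSolveTRSubproblem, model, subsolver) =
    error("$(typeof(subsolver)) cannot solve TR subproblems")
```

With this, `trunk(model, CGLSType())` proceeds, while `trunk(model, GMRESType())` raises an error at dispatch time.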

abelsiqueira commented 6 years ago

Sorry, I meant that if we have memoization, we can avoid creating additional code by simply using the general version. For instance,

```julia
f = obj(nlp, x)
g = grad(nlp, x)
```

for `nlp` being an NLS model, we have

```julia
function obj(nls, x)
  r = residual(nls, x)
  return 0.5 * dot(r, r)
end

function grad(nls, x)
  r = residual(nls, x)      # memoized: reuses the evaluation from obj
  return jtprod(nls, x, r)  # J(x)' * r
end
```

I think I understand traits now, in theory at least.

dpo commented 6 years ago

Yes, memoization would definitely help in other places too. Note that currently, `trunkls` evaluates the residual and the Jacobian as an operator, so I don't think it performs redundant evaluations, does it?
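For context, "the Jacobian as an operator" means a matrix-free object exposing only products `J*v` and `J'*u`, as in LinearOperators.jl. A self-contained toy version (the `JacOperator` type and the example residual are illustrative, not the real API):

```julia
# Matrix-free Jacobian: store closures for J*v and J'*u, never the matrix.
struct JacOperator
    prod::Function    # v -> J*v
    tprod::Function   # u -> J'*u
end

# Example residual r(x) = [x1^2 - x2, x1 + x2], whose Jacobian at x is
# [2x1  -1; 1  1]; both products are hand-coded below.
jac_op(x) = JacOperator(v -> [2x[1]*v[1] - v[2], v[1] + v[2]],
                        u -> [2x[1]*u[1] + u[2], -u[1] + u[2]])

J = jac_op([1.0, 2.0])
J.prod([1.0, 0.0])   # → [2.0, 1.0]
J.tprod([1.0, 0.0])  # → [2.0, -1.0]
```

Building the operator requires only the current point, so each Krylov iteration costs one product with `J` or `J'` rather than a fresh Jacobian evaluation.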

abelsiqueira commented 6 years ago

No, I don't think it does.

dpo commented 6 years ago

The tests on 0.7 fail because, for some reason, we use MathProgBase 0.6.4 instead of 0.7.0. I don't see a dependency forcing that version of MPB, though, do you?

abelsiqueira commented 6 years ago

Nothing forcing 0.6.4, as far as I can tell.

coveralls commented 6 years ago

Coverage Status

Coverage decreased (-12.5%) to 61.919% when pulling 4320b32234a93ace8ee7c874556f67d2a486d487 on trunkls into 8132d6578592a8298c5ae1684f4cbf8bc87e9514 on master.

coveralls commented 6 years ago

Coverage Status

Coverage decreased (-14.5%) to 56.704% when pulling 9f79d8680a5b87301bf7b922cf9096dff0c8b8fd on trunkls into 8e2968d08772476c1ad60df38b08cbcf53c3f020 on master.

dpo commented 5 years ago

Rebased.

@abelsiqueira The tests are going to fail with the message

```
Simple test: Error During Test at /Users/dpo/dev/julia/JSO/Optimize.jl/test/solvers/trunkls.jl:4
  Got exception outside of a @test
  UndefVarError: v not defined
  Stacktrace:
   [1] statsgetfield(::GenericExecutionStats, ::Symbol) at /Users/dpo/dev/julia/JSO/Optimize.jl/src/stats/stats.jl:125
```

Will that be fixed by #77 ?

abelsiqueira commented 5 years ago

It appears to be the case. You also found a bug at line 125 of stats, where it should say `Unknown field $name`. So the field is supposed to be one of the `neval_` counters for residuals.

dpo commented 5 years ago

Fix it here or in #77?

abelsiqueira commented 5 years ago

Here, because we can

dpo commented 5 years ago

Seems ok. Now we have

```
Simple test: Error During Test at /home/travis/build/JuliaSmoothOptimizers/Optimize.jl/test/solvers/trunkls.jl:4
  Got exception outside of a @test
  Unknown field neval_residual
  Stacktrace:
   [1] error(::String) at ./error.jl:33
   [2] statsgetfield(::GenericExecutionStats, ::Symbol) at /home/travis/build/JuliaSmoothOptimizers/Optimize.jl/src/stats/stats.jl:125
```

abelsiqueira commented 5 years ago

Then we can merge #77.

abelsiqueira commented 5 years ago

Thanks