JuliaSmoothOptimizers / OptimizationProblems.jl

Optimization Problems for Julia

add `ADNLSModel` constructor in `arglina` #200

Closed tmigot closed 1 year ago

tmigot commented 2 years ago

This is an example of how we could use `ADNLSModel` for the least-squares objective.

Currently, the tests break because the JuMP models implemented so far generally omit the 1/2 factor in front of the least-squares objective (see #162); the sketch below illustrates the mismatch.
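To make the mismatch concrete, here is a minimal sketch with a toy residual (not arglina itself; the model and values are illustrative only):

```julia
# NLS models evaluate obj(nlp, x) = 1/2 * ||F(x)||^2, while a JuMP-style
# sum of squares omits the 1/2, so the two objectives differ by a factor of 2.
using ADNLPModels, NLPModels, LinearAlgebra

F(x) = [x[1] - 1.0, x[2] - 2.0]   # toy residual
nls = ADNLSModel(F, zeros(2), 2)

x = zeros(2)
@show obj(nls, x)          # 0.5 * (1 + 4) = 2.5
@show 0.5 * norm(F(x))^2   # 2.5, matches the NLS convention
@show sum(F(x) .^ 2)       # 5.0, the "no 1/2" convention of the JuMP models
```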

@abelsiqueira @dpo Any opinion on this?

codecov[bot] commented 2 years ago

Codecov Report

Patch coverage: 100.00% and no project coverage change.

Comparison is base (040a34c) 99.83% compared to head (897c4b7) 99.83%.

:exclamation: Current head 897c4b7 differs from pull request most recent head 8cdf744. Consider uploading reports for the commit 8cdf744 to get more accurate results.

Additional details and impacted files

```diff
@@           Coverage Diff            @@
##             main     #200    +/-   ##
=======================================
  Coverage   99.83%   99.83%
=======================================
  Files         793      793
  Lines        7260     7275    +15
=======================================
+ Hits         7248     7263    +15
  Misses         12       12
```

| [Impacted Files](https://codecov.io/gh/JuliaSmoothOptimizers/OptimizationProblems.jl/pull/200?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaSmoothOptimizers) | Coverage Δ | |
|---|---|---|
| [src/PureJuMP/arglina.jl](https://codecov.io/gh/JuliaSmoothOptimizers/OptimizationProblems.jl/pull/200?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaSmoothOptimizers#diff-c3JjL1B1cmVKdU1QL2FyZ2xpbmEuamw=) | `100.00% <ø> (ø)` | |
| [src/ADNLPProblems/arglina.jl](https://codecov.io/gh/JuliaSmoothOptimizers/OptimizationProblems.jl/pull/200?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaSmoothOptimizers#diff-c3JjL0FETkxQUHJvYmxlbXMvYXJnbGluYS5qbA==) | `100.00% <100.00%> (ø)` | |
| [src/Meta/arglina.jl](https://codecov.io/gh/JuliaSmoothOptimizers/OptimizationProblems.jl/pull/200?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaSmoothOptimizers#diff-c3JjL01ldGEvYXJnbGluYS5qbA==) | `100.00% <100.00%> (ø)` | |


tmigot commented 2 years ago

@dpo Here is an example of the benefit of having both the in-place and out-of-place residual under the same name, so that we can change the backend effortlessly.

```julia
using ADNLPModels, NLPModels, ReverseDiff

# `default_nvar` is provided by OptimizationProblems.jl; set here so the
# snippet runs standalone.
default_nvar = 100

function arglina(; n::Int = default_nvar, type::Val{T} = Val(Float64), kwargs...) where {T}
  # In-place residual: fills r with the arglina residual at x.
  function F(r, x)
    m = 2 * n
    for i = 1:n
      r[i] = x[i] - T(2 / m) * sum(x[j] for j = 1:n) - 1
      r[i + n] = -T(2 / m) * sum(x[j] for j = 1:n) - 1
    end
    return r
  end
  # Out-of-place residual: same name, allocates r and delegates to the in-place method.
  function F(x)
    r = similar(x, 2 * n)
    return F(r, x)
  end
  x0 = ones(T, n)
  return ADNLPModels.ADNLSModel(F, x0, 2 * n, name = "arglina"; kwargs...)
end

nlp = arglina(n = 10)
F = nlp.F
output = typeof(nlp.meta.x0)(undef, nlp.nls_meta.nequ)
input = nlp.meta.x0

# Use ForwardDiff on x -> nlp.F(x)
jac_residual(nlp, input)
@show @allocated jac_residual(nlp, input) # 6528

# Use ReverseDiff on (r, x) -> nlp.F(r, x)
cfJ = ReverseDiff.JacobianTape(nlp.F, output, input)
ReverseDiff.jacobian!(cfJ, input)
@show @allocated ReverseDiff.jacobian!(cfJ, input) # 1808 !!

# Use ReverseDiff on (r, x) -> nlp.F(r, x) and pre-allocate the result
result = zeros(20, 10)
ReverseDiff.jacobian!(result, cfJ, input)
@show @allocated ReverseDiff.jacobian!(result, cfJ, input) # 0
```

dpo commented 1 year ago

Hi @tmigot. This is quite old and I forget what you were trying to explain. I don't see anything in the example above that would be complicated if the in-place function were called `F!`. What am I missing?

tmigot commented 1 year ago

The issue is that the function returns

> `ADNLPModels.ADNLSModel(F, x0, 2 * n, name = "arglina"; kwargs...)`

so that wouldn't work if we had an `F` and an `F!`: the constructor takes a single residual argument, so both calling styles must live under one name, as the sketch below shows.
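For context, here is a minimal sketch of why a single name matters; this is plain Julia method dispatch, not the actual ADNLPModels internals:

```julia
# Two methods under one generic function: the constructor receives a single
# callable that covers both calling styles.
function F(r, x)          # in-place method
  r .= x .- 1
  return r
end
F(x) = F(similar(x), x)   # out-of-place method delegates to the in-place one

# A backend can check which calling style is available and pick accordingly:
@show hasmethod(F, Tuple{Vector{Float64}, Vector{Float64}})  # true: in-place
@show hasmethod(F, Tuple{Vector{Float64}})                   # true: out-of-place
```

With separate names `F` and `F!`, the constructor call above would need two residual arguments instead of one.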

tmigot commented 1 year ago

I wouldn't merge it anyway, because right now calling `obj` for an NLS model allocates (see the sketch below), while we have been trying to reduce allocations in https://github.com/JuliaSmoothOptimizers/OptimizationProblems.jl/pull/241.
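For reference, a minimal sketch of the allocation issue with a toy model (allocation counts depend on the AD backend; this is an illustration, not a benchmark of this PR):

```julia
# obj for an NLS model goes through the residual, so a plain call allocates
# the residual vector; residual! into a preallocated buffer avoids that part.
using ADNLPModels, NLPModels, LinearAlgebra

nls = ADNLSModel(x -> [x[1] - 1.0, x[2] - 2.0], zeros(2), 2)
x = nls.meta.x0
Fx = similar(x, nls.nls_meta.nequ)

obj(nls, x)                  # warm-up so compilation is not counted
@show @allocated obj(nls, x)

residual!(nls, x, Fx)        # warm-up
alloc = @allocated begin
  residual!(nls, x, Fx)      # fill the preallocated buffer in place
  0.5 * dot(Fx, Fx)          # 1/2 * ||F(x)||^2 from the buffer
end
@show alloc                  # fewer allocations, though not necessarily zero
                             # when the residual itself is out-of-place, as here
```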