Patch coverage: 100.00% and no project coverage change. Comparison is base (040a34c) 99.83% compared to head (897c4b7) 99.83%.

:exclamation: Current head 897c4b7 differs from pull request most recent head 8cdf744. Consider uploading reports for the commit 8cdf744 to get more accurate results.
@dpo Here is an example of the benefit of having both the in-place and out-of-place residual under the same name, so that we can change the backend effortlessly.
```julia
using ADNLPModels, NLPModels, ReverseDiff

default_nvar = 100  # default dimension, as in OptimizationProblems.jl

function arglina(; n::Int = default_nvar, type::Val{T} = Val(Float64), kwargs...) where {T}
  function F(r, x)
    m = 2 * n
    for i = 1:n
      r[i] = x[i] - T(2 / m) * sum(x[j] for j = 1:n) - 1
      r[i + n] = -T(2 / m) * sum(x[j] for j = 1:n) - 1
    end
    return r
  end
  function F(x)
    r = similar(x, 2 * n)
    return F(r, x)
  end
  x0 = ones(T, n)
  return ADNLPModels.ADNLSModel(F, x0, 2 * n, name = "arglina"; kwargs...)
end

nlp = arglina(n = 10)
F = nlp.F
output = typeof(nlp.meta.x0)(undef, nlp.nls_meta.nequ)
input = nlp.meta.x0

# Use ForwardDiff on x -> nlp.F(x)
jac_residual(nlp, input)
@show @allocated jac_residual(nlp, input) # 6528

# Use ReverseDiff on (r, x) -> nlp.F(r, x)
cfJ = ReverseDiff.JacobianTape(nlp.F, output, input)
ReverseDiff.jacobian!(cfJ, input)
@show @allocated ReverseDiff.jacobian!(cfJ, input) # 1808 !!

# Use ReverseDiff on (r, x) -> nlp.F(r, x) and pre-allocate the result
result = zeros(20, 10)
ReverseDiff.jacobian!(result, cfJ, input)
@show @allocated ReverseDiff.jacobian!(result, cfJ, input) # 0
```
Hi @tmigot. This is quite old and I forget what you were trying to explain. I don't see anything in the example above that would be complicated if the in-place function were called `F!`. What am I missing?
The issue is that the function returns

> `ADNLPModels.ADNLSModel(F, x0, 2 * n, name = "arglina"; kwargs...)`

so that wouldn't work if we had both an `F` and an `F!`.
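To make the constraint concrete, here is a minimal, hypothetical sketch (the toy residual below is not `arglina`): with a single name, one function object carries both methods and can be passed to the existing constructor as-is, whereas splitting it into `F` and `F!` would force either a choice of which one to pass or a change to the constructor signature.

```julia
using ADNLPModels

# One generic function with two methods: in-place and out-of-place.
F(r, x) = (r .= x .- 1; r)                  # in-place residual
F(x) = F(similar(x, length(x)), x)          # out-of-place residual, same name

# The existing constructor takes a single function argument, so this works:
nls = ADNLSModel(F, zeros(2), 2)

# With separate names F and F!, the call above could only forward one of them,
# unless the constructor itself were changed to accept both.
```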
I wouldn't merge it anyway, because right now calling `obj` for an NLS allocates, while we were trying to make some progress in this direction in https://github.com/JuliaSmoothOptimizers/OptimizationProblems.jl/pull/241
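For context, a quick way to see the allocation (a small sketch with a toy `ADNLSModel`, not one of the problems in this PR): `obj` for an NLS model typically builds the residual vector before taking its squared norm, so each call allocates.

```julia
using ADNLPModels, NLPModels

# Toy least-squares model: F(x) = [x[1] - 1, x[2] - 2]
nls = ADNLSModel(x -> [x[1] - 1; x[2] - 2], zeros(2), 2)
x = nls.meta.x0

obj(nls, x)                    # warm-up / compilation
@show @allocated obj(nls, x)   # nonzero: the residual vector is allocated on each call
```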
This is an example of how we could use `ADNLSModel` for the least-squares objective. Currently, the tests break because the JuMP models currently implemented generally don't have the 1/2 factor in front of the objective (#162).

@abelsiqueira @dpo Any opinion on this?
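To illustrate the mismatch (a minimal sketch with a toy residual, not one of the actual models): the `ADNLSModel` objective is 1/2 ‖F(x)‖², while a JuMP formulation written as a plain sum of squares evaluates to ‖F(x)‖², so the two objective values differ by a factor of 2 and tests comparing them fail.

```julia
using ADNLPModels, NLPModels

F(x) = [x[1] - 1.0; x[2] - 2.0]
nls = ADNLSModel(F, zeros(2), 2)
x = nls.meta.x0

obj(nls, x)        # 2.5 = 1/2 * (1^2 + 2^2), the ADNLSModel convention
sum(abs2, F(x))    # 5.0 = what a sum-of-squares JuMP objective would return
```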