JuliaSmoothOptimizers / ADNLPModels.jl

Add reverse-forward hprod #171

Closed tmigot closed 1 year ago

tmigot commented 1 year ago

Only for the objective function `hprod`, for now.
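
For context, the reverse-forward (forward-over-reverse) Hessian-vector product seeds the direction `v` in the dual part of the input, computes the gradient in reverse mode, and reads `Hv` off the dual parts, since ∇f(x + εv) = ∇f(x) + ε ∇²f(x)v. A minimal sketch of the idea, not the backend's actual code (the helper name and test objective are illustrative):

```julia
using ForwardDiff, ReverseDiff

# Sketch of a forward-over-reverse Hessian-vector product.
function hvprod_sketch(f, x, v)
    xd = ForwardDiff.Dual.(x, v)       # encode x + ε v
    gd = ReverseDiff.gradient(f, xd)   # reverse-mode gradient over dual numbers
    ForwardDiff.partials.(gd, 1)       # ε-coefficients of ∇f(x + ε v), i.e. Hv
end

f(x) = sum(xi -> xi^4, x) / 4          # Hessian is Diagonal(3 .* x .^ 2)
hvprod_sketch(f, ones(3), [1.0, 2.0, 3.0])   # ≈ [3.0, 6.0, 9.0]
```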

codecov[bot] commented 1 year ago

Codecov Report

Patch coverage: 97.11% and project coverage change: +0.14% :tada:

Comparison is base (3d8e3ea) 95.32% compared to head (a5b2d9d) 95.47%.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #171      +/-   ##
==========================================
+ Coverage   95.32%   95.47%   +0.14%
==========================================
  Files          14       14
  Lines        1647     1744      +97
==========================================
+ Hits         1570     1665      +95
- Misses         77       79       +2
```

| [Impacted Files](https://app.codecov.io/gh/JuliaSmoothOptimizers/ADNLPModels.jl/pull/171?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaSmoothOptimizers) | Coverage Δ | |
|---|---|---|
| [src/ADNLPModels.jl](https://app.codecov.io/gh/JuliaSmoothOptimizers/ADNLPModels.jl/pull/171?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaSmoothOptimizers#diff-c3JjL0FETkxQTW9kZWxzLmps) | `100.00% <ø> (ø)` | |
| [src/predefined_backend.jl](https://app.codecov.io/gh/JuliaSmoothOptimizers/ADNLPModels.jl/pull/171?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaSmoothOptimizers#diff-c3JjL3ByZWRlZmluZWRfYmFja2VuZC5qbA==) | `100.00% <ø> (ø)` | |
| [src/reverse.jl](https://app.codecov.io/gh/JuliaSmoothOptimizers/ADNLPModels.jl/pull/171?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaSmoothOptimizers#diff-c3JjL3JldmVyc2Uuamw=) | `65.45% <93.61%> (+19.18%)` | :arrow_up: |
| [src/sparse_hessian.jl](https://app.codecov.io/gh/JuliaSmoothOptimizers/ADNLPModels.jl/pull/171?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaSmoothOptimizers#diff-c3JjL3NwYXJzZV9oZXNzaWFuLmps) | `100.00% <100.00%> (ø)` | |

... and [2 files with indirect coverage changes](https://app.codecov.io/gh/JuliaSmoothOptimizers/ADNLPModels.jl/pull/171/indirect-changes?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaSmoothOptimizers)

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.

tmigot commented 1 year ago

Connected to #165

When it works, the results are promising: https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl/blob/benchmark/benchmark/2023-07-07_adnlpmodels_benchmark_hprod_optimized_nscal_1000_mono.png

For instance, the following fails:

```julia
using ADNLPModels, NLPModels, OptimizationProblems

T = Float64
nlp = OptimizationProblems.ADNLPProblems.clnlbeam(n = 1000, hprod_backend = ADNLPModels.ReverseDiffADHvprod)
n = nlp.meta.nvar
v = [sin(T(i) / 10) for i = 1:n]
hprod(nlp, get_x0(nlp), v)  # fails with this backend
```

This can straightforwardly be adapted to compute the sparse objective Hessian.
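
For reference, the resulting sparse objective Hessian surfaces through the usual NLPModels API; a minimal usage sketch on a toy model (the objective is illustrative):

```julia
using ADNLPModels, NLPModels

# `hess` assembles the Hessian from its sparse lower triangle;
# `hess_structure` / `hess_coord` expose the coordinate form directly.
nlp = ADNLPModel(x -> x[1]^4 + x[1] * x[2] + x[2]^2, ones(2))
H = hess(nlp, nlp.meta.x0)
rows, cols = hess_structure(nlp)      # sparsity pattern (lower triangle)
vals = hess_coord(nlp, nlp.meta.x0)   # matching nonzero values
```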

github-actions[bot] commented 1 year ago

| Package name | latest | stable |
|---|---|---|
| CaNNOLeS.jl | | |
| DCISolver.jl | | |
| DerivativeFreeSolvers.jl | | |
| JSOSolvers.jl | | |
| NLPModelsIpopt.jl | | |
| OptimizationProblems.jl | | |
| Percival.jl | | |
| QuadraticModels.jl | | |
| SolverBenchmark.jl | | |
| SolverTools.jl | | |
tmigot commented 1 year ago

@jbcaillau @ocots @amontoison @BaptisteCbl

I added the `hprod` for the Lagrangian Hessian and optimized the sparse Hessian computation. This is a clear improvement; see the benchmarks.
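
A minimal usage sketch of the two products on a toy constrained model (the problem data is illustrative, not the benchmark problem):

```julia
using ADNLPModels, NLPModels

# `hprod` with multipliers y gives the Hessian-of-the-Lagrangian product.
nlp = ADNLPModel(x -> (x[1] - 1)^2 + 4 * (x[2] - x[1]^2)^2, [-1.2; 1.0],
                 x -> [x[1]^2 + x[2]^2 - 1], [0.0], [0.0])
x, y, v = nlp.meta.x0, ones(nlp.meta.ncon), ones(nlp.meta.nvar)
hprod(nlp, x, v)      # ∇²f(x) ⋅ v, objective only
hprod(nlp, x, y, v)   # ∇²ₓL(x, y) ⋅ v, Lagrangian
```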

ocots commented 1 year ago

Test with this file: https://github.com/ocots/JumpVSADNLP/blob/master/bench.jl

```
Hessian of Lagrangian
ADNLP
  1.191 ms (157 allocations: 141.55 KiB)
JuMP
  60.402 μs (137 allocations: 24.08 KiB)
Jacobian of constraints
ADNLP
  10.509 μs (22 allocations: 20.70 KiB)
JuMP
  13.329 μs (15 allocations: 19.80 KiB)
```

Seems to rock!
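
Timings of this shape come from BenchmarkTools' `@btime`; a hypothetical reproduction pattern, with an illustrative model standing in for the benchmark's:

```julia
using ADNLPModels, NLPModels, BenchmarkTools

nlp = ADNLPModel(x -> sum((x .- 1) .^ 2), ones(100),
                 x -> [sum(x) - 1], [0.0], [0.0])
x, y = nlp.meta.x0, ones(nlp.meta.ncon)

# Hessian of the Lagrangian in sparse coordinate form, in place
rows, cols = hess_structure(nlp)
vals = similar(x, length(rows))
@btime hess_coord!($nlp, $x, $y, $vals)

# Jacobian of the constraints
@btime jac_coord($nlp, $x)
```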

ocots commented 1 year ago

Great job!

New release please 🙏🙏🙏 (JuliaCon next Tuesday)

tmigot commented 1 year ago

Let me add some documentation on the new features and improvements, and then I'll make the new release (likely tomorrow).

jbcaillau commented 1 year ago

Many thanks @tmigot for the timely update. Doing some more tests with @PierreMartinon (Goddard test case).

dpo commented 1 year ago

Amazing results @tmigot! Congratulations!

jbcaillau commented 1 year ago

@tmigot @amontoison check this

amontoison commented 1 year ago

Thanks @jbcaillau! Our computation of the sparse Hessian is 20x–30x slower than JuMP's. We should be able to improve it by taking the symmetry of the Hessian into account to limit the number of directional derivatives of the gradient.
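
To make the symmetry point concrete: group columns so that columns sharing a color are structurally orthogonal; one Hessian-vector product per color then recovers every column in that class, and exploiting symmetry (a star coloring) reduces the number of colors further. A sketch under a plain column coloring, where `hv`, `S`, and `colors` are assumed inputs:

```julia
using SparseArrays

# Recover a sparse Hessian from Hessian-vector products, one per color.
# `hv(v)` computes H * v, `S` is the sparsity pattern, and `colors[j]`
# colors column j so that same-colored columns share no nonzero row.
function hess_from_products(hv, S::SparseMatrixCSC, colors)
    n = size(S, 2)
    H = spzeros(n, n)
    for c in 1:maximum(colors)
        Hvc = hv(Float64.(colors .== c))   # one directional derivative per color
        for j in findall(==(c), colors), i in findall(!iszero, S[:, j])
            H[i, j] = Hvc[i]               # unambiguous within the color class
        end
    end
    H
end
```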

jbcaillau commented 1 year ago

@amontoison thanks for the feedback. Probably some profiling to do on our side, as our objective and constraint evaluations might be far less efficient than the pure Julia / JuMP code written for the comparison.

PS: greetings from JuliaCon 2023 with @ocots @gergaud ... and @tmigot!