Closed — paraynaud closed this pull request 1 year ago
Following #86
> @paraynaud The more I think of it, the more it feels slightly strange that `nlp.model` counts `hprod` evaluations while no Hessian products were used.
I don't agree: since we use the quasi-Newton operator within `hprod`, it is consistent to count it with `neval_hprod`.
Otherwise, to keep it consistent, we would have to add a method `approxprod` similar to `hprod`, but that would mess up the `trunk` implementation and probably the other optimization methods from JSOSolvers.
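(For context, a minimal sketch of the delegation being discussed; `hprod_sketch!` is a hypothetical name, and the commented increment is the counter update this thread debates, not a claim about the actual NLPModelsModifiers.jl source.)

```julia
using LinearAlgebra

# Hypothetical sketch: a quasi-Newton model answers Hessian-vector product
# requests with its operator `model.op` instead of a true Hessian of `nlp`.
function hprod_sketch!(model, x, v, Hv)
  mul!(Hv, model.op, v)              # product with the quasi-Newton operator
  # increment!(model, :neval_hprod)  # the counter increment under discussion
  return Hv
end
```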
I am not sure I fully understand. Let's try to clarify. We have

```julia
nlp = ADNLPModel(f, x0)
model = LBFGSSolver(nlp)
```

and we do

```julia
hprod!(model, x0)
```

With your PR, currently

```julia
neval_hprod(nlp) == 1
neval_hprod(model) == 1
```

because the LBFGSModel counter is synced with the `nlp` (actually, `model.counters` doesn't exist).
I agree that `neval_hprod(model) == 1`, as the "Hessian" of `model` was used.
However, I find `neval_hprod(nlp) == 1` slightly confusing because no Hessian of `nlp` has been used.
I think it would be accurate to have

```julia
neval_hprod(nlp) == 0 # because no hessian was used
neval_hprod(model) == 1
```
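(For reference, a self-contained sketch of this scenario, assuming ADNLPModels, NLPModels, and NLPModelsModifiers are installed; the allocating `hprod(model, x, v)` call is used instead of the in-place call above, and the comments describe the behaviour proposed here, not guaranteed output.)

```julia
using ADNLPModels, NLPModels, NLPModelsModifiers

f(x) = sum(x .^ 2)
x0 = ones(2)
nlp = ADNLPModel(f, x0)
model = LBFGSModel(nlp)

v = ones(2)
hprod(model, x0, v)    # product with the quasi-Newton approximation

neval_hprod(model)     # proposed: 1, the "Hessian" of `model` was used
neval_hprod(nlp)       # proposed: 0, no Hessian of `nlp` was used
```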
I may have been confusing about LBFGS: here it is an `LBFGSModel` used in `trunk`, not the inverse LBFGS linesearch from JSOSolvers.
I use:

```julia
nlp = ADNLPModel(f, x0)
model = LBFGSModel(nlp)
ges = trunk(model)
```

> because the LBFGSModel counter is synced with the nlp (actually, `model.counters` doesn't exist).

Correct!

> I agree that `neval_hprod(model) == 1` as the "hessian" of `model` was used.
> However, I find `neval_hprod(nlp) == 1` slightly confusing because no hessian of `nlp` has been used.

I did not think about it this way, and I get your point.
I am not sure how to distinguish the two `neval_hprod` methods, since the counter is paired to `nlp`.
A suggestion: maybe we can link the `neval_hprod` method to the counter from `model.op.counter`, so that it returns the number of `mul!` performed by `model.op`, and delete the incrementation of `nlp.counter.neval_hprod` that I added.
By doing that, `neval_hprod(nlp)` will stay at `0` all the time (as long as no `hprod` is performed directly on `nlp`).
What do you think about it?
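(If it helps, the suggestion leans on the operator already counting its own products; a small illustrative sketch, assuming the `nprod` field mentioned later in this thread and the `LBFGSOperator` constructor from LinearOperators.jl.)

```julia
using LinearOperators

B = LBFGSOperator(5)   # 5x5 LBFGS operator with default memory
v = ones(5)
B * v                  # one operator-vector product (a `mul!` under the hood)

B.nprod                # the operator's own counter; expected to read 1 here
```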
Yes, that would work. If I understand correctly, you plan to do something like this?

```julia
neval_hprod(model::QuasiNewtonModel) = model.op.counter
```

Do not hesitate to add this as a unit test:

```julia
neval_hprod(nlp) == 0 # because no hessian was used
neval_hprod(model) == 1
```
@tmigot I gave up on writing it as

```julia
NLPModels.neval_hprod(nlp::QuasiNewtonModel) = nlp.op.nprod
```

so I wrote it as

```julia
NLPModels.neval_hprod(nlp::LBFGSModel) = nlp.op.nprod
NLPModels.neval_hprod(nlp::LSR1Model) = nlp.op.nprod
```

and it behaves according to our discussion.
I made a slight distinction in the tests, because an LSR1Model performs a `prod` during the `push!` method, which is not the case for an LBFGSModel.
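(A hedged sketch of the test distinction described above; the `push!` forwarding to the operator and the exact counter values are assumptions based on this comment, not verified output.)

```julia
using ADNLPModels, NLPModels, NLPModelsModifiers

f(x) = sum(x .^ 2)
x0 = ones(2)
v = ones(2)

lbfgs = LBFGSModel(ADNLPModel(f, x0))
lsr1  = LSR1Model(ADNLPModel(f, x0))

hprod(lbfgs, x0, v)
hprod(lsr1, x0, v)
neval_hprod(lbfgs)   # assumed: 1
neval_hprod(lsr1)    # assumed: 1

# Updating the approximations: per the comment above, the LSR1 update
# performs an extra operator-vector product, while the LBFGS update does not.
s = fill(0.1, 2); y = fill(0.2, 2)
push!(lbfgs, s, y)
push!(lsr1, s, y)
neval_hprod(lbfgs)   # assumed: still 1
neval_hprod(lsr1)    # assumed: 2 (one extra product from the LSR1 push!)
```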
Codecov Report
Base: 92.01% // Head: 92.04% // Increases project coverage by +0.02% :tada:

Additional details and impacted files:
```diff
@@            Coverage Diff             @@
##             main      #87      +/-   ##
==========================================
+ Coverage   92.01%   92.04%   +0.02%
==========================================
  Files           6        6
  Lines         839      842       +3
==========================================
+ Hits          772      775       +3
  Misses         67       67
```

| Impacted Files | Coverage Δ | |
|---|---|---|
| [src/quasi-newton.jl](https://codecov.io/gh/JuliaSmoothOptimizers/NLPModelsModifiers.jl/pull/87?src=pr&el=tree#diff-c3JjL3F1YXNpLW5ld3Rvbi5qbA==) | `94.28% <100.00%> (+0.34%)` | :arrow_up: |
| [src/NLPModelsModifiers.jl](https://codecov.io/gh/JuliaSmoothOptimizers/NLPModelsModifiers.jl/pull/87?src=pr&el=tree#diff-c3JjL05MUE1vZGVsc01vZGlmaWVycy5qbA==) | `100.00% <0.00%> (ø)` | |