tmigot opened this pull request 1 year ago
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 92.54%. Comparing base (9585863) to head (e8d8059). Report is 3 commits behind head on main.
:exclamation: Current head e8d8059 differs from pull request most recent head 64a857d. Please upload reports for the commit 64a857d to get more accurate results.
@amontoison The issue here is that the Jacobian of the residual is most likely not correct, so the test won't pass :s
Is it possible to isolate which components are wrong? First, do we have the correct sparsity pattern? (A quick sanity check is sketched below.)
The Jacobian was computed explicitly here: https://www.gerad.ca/en/papers/G-2020-42. That should be what’s implemented.
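A minimal sketch of such a sanity check on the declared pattern (not from the thread; the dataset name is an arbitrary example, and I'm assuming the standard NLPModels API for sparse residual Jacobians):

```julia
# Sketch: sanity-check the declared Jacobian sparsity pattern.
# "problem-49-7776-pre" is just an example dataset name.
using BundleAdjustmentModels, NLPModels

model = BundleAdjustmentModel("problem-49-7776-pre")
rows, cols = jac_structure_residual(model)

# No duplicate (row, col) pairs, and all indices within bounds.
@assert length(Set(zip(rows, cols))) == model.nls_meta.nnzj
@assert maximum(rows) <= model.nls_meta.nequ
@assert maximum(cols) <= model.meta.nvar
```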
@amontoison Could you have your test print the errors?
@tmigot You opened the PR and know better than me what is implemented in NLPModelsTest.jl. Is it possible?
You could eventually compare the obtained result with automatic differentiation: https://jso.dev/ADNLPModels.jl/dev/mixed/ explains that you can build an ADNLSModel from a BundleAdjustmentModel.
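Something along these lines (a sketch, not tested here; the dataset name and the perturbation away from x0 are arbitrary choices, and Jacobian-vector products are compared to avoid forming a dense AD Jacobian):

```julia
# Sketch: compare the hand-coded sparse Jacobian with AD via an ADNLSModel.
using BundleAdjustmentModels, ADNLPModels, NLPModels
using LinearAlgebra, SparseArrays

model = BundleAdjustmentModel("problem-49-7776-pre")  # example dataset
adnls = ADNLSModel(x -> residual(model, x), model.meta.x0, model.nls_meta.nequ)

x = model.meta.x0 .+ 0.1 .* randn(model.meta.nvar)  # move away from x0

# Assemble the hand-coded sparse Jacobian from its coordinate form.
rows, cols = jac_structure_residual(model)
vals = jac_coord_residual(model, x)
J = sparse(rows, cols, vals, model.nls_meta.nequ, model.meta.nvar)

# Compare Jacobian-vector products rather than full matrices.
v = randn(model.meta.nvar)
@show norm(J * v - jprod_residual(adnls, x, v))
```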
jacobian_residual_check returns a Dict{Tuple{Int, Int}, Float64}, where the tuple holds the indices of a Jacobian entry and the Float64 is the error at that entry when it is nonzero. In other words, those are the wrong indices.
The error will be nonzero, because it's finite differences (except in special cases), so it's the magnitude of the error that we should look at.
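For instance, one could sort the returned dictionary by magnitude and print the worst offenders (a sketch; calling jacobian_residual_check with only the model, relying on its defaults, is my assumption):

```julia
# Sketch: print the ten largest entries of the error dictionary.
using BundleAdjustmentModels, NLPModelsTest

model = BundleAdjustmentModel("problem-49-7776-pre")  # example dataset
errs = jacobian_residual_check(model)  # Dict{Tuple{Int, Int}, Float64}

worst = sort!(collect(errs); by = p -> abs(p.second), rev = true)
for ((i, j), e) in first(worst, 10)
    println("J[$i, $j]: error = $e")
end
```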
I checked the Jacobians with ADNLPModels.jl, and they are only correct when we use x = nlp.meta.x0. Something is wrong in the function coord_residual implemented in this package.
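A sketch reproducing that symptom, under the same assumptions as the comparison above (example dataset, Jacobian-vector products):

```julia
# Sketch: the products agree at x0 but not at a perturbed point.
using BundleAdjustmentModels, ADNLPModels, NLPModels
using LinearAlgebra, SparseArrays

model = BundleAdjustmentModel("problem-49-7776-pre")  # example dataset
adnls = ADNLSModel(x -> residual(model, x), model.meta.x0, model.nls_meta.nequ)
rows, cols = jac_structure_residual(model)
v = randn(model.meta.nvar)

for x in (model.meta.x0, model.meta.x0 .+ 0.1 .* randn(model.meta.nvar))
    J = sparse(rows, cols, jac_coord_residual(model, x),
               model.nls_meta.nequ, model.meta.nvar)
    @show norm(J * v - jprod_residual(adnls, x, v))  # small only at x0
end
```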
In #92, I added an option in the benchmarks to also check the Jacobian.
https://github.com/JuliaSmoothOptimizers/NLPModelsTest.jl/issues/101