Closed. @ForceBru closed this issue 6 months ago.
Hi @ForceBru!
true # but there's an error message, so the solution shouldn't be reliable?
When the algorithm returns an error message, stats.solution
corresponds to the last iterate, so it can still be of interest. As you noticed, the computed solution is actually very close to the one you expected.
The issue in your example is that the problem is not continuously differentiable.
using ADNLPModels, NLPModels
nlp = ADNLPModel(x -> sum(x .^ 2) |> sqrt, ones(10))
grad(nlp, zeros(10)) # the square root is not differentiable at 0, so the gradient returns NaNs
and so there are no theoretical guarantees.
In general, though, optimizing x -> sum(x.^2) is equivalent to optimizing x -> norm(x): the square root is monotone, so both objectives have the same minimizers, and the squared form is smooth.
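For illustration, here is a minimal sketch of the smooth reformulation (the choice of JSOSolvers.jl's trunk solver is just an assumption; any JSO-compliant solver would do):
using ADNLPModels, JSOSolvers
nlp = ADNLPModel(x -> sum(x .^ 2), ones(10))  # smooth objective, differentiable everywhere
stats = trunk(nlp)  # trust-region solver from JSOSolvers.jl
stats.solution      # close to zeros(10), which also minimizes x -> norm(x)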
If you are interested in least squares problems, I would recommend using ADNLSModel instead of ADNLPModel.
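A sketch of what that could look like for your example (the residual F(x) = x is an assumption; with it the least-squares objective is 0.5 * norm(x)^2):
using ADNLPModels
F(x) = x                           # residuals: the objective becomes 0.5 * sum(F(x).^2)
nls = ADNLSModel(F, ones(10), 10)  # 10 residual equations, starting point ones(10)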
If you are interested in nonsmooth formulations like your example, I would first look into https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl
@ForceBru Can we close this?
As @tmigot mentioned, the issue arises because the iterates converge to a place where the objective is not differentiable.
When I use the example from the docs everything works OK:
Throw in a square root and I get the error message:
However, the solution itself is fine:
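As a hypothetical reconstruction of that experiment (assuming JSOSolvers.jl's lbfgs solver; this is a sketch, not the original code or output):
using ADNLPModels, JSOSolvers
nlp = ADNLPModel(x -> sqrt(sum(x .^ 2)), ones(10))  # the square root is nonsmooth at the minimizer
stats = lbfgs(nlp)
stats.status    # may report an exception or failure, since the gradient is NaN at the solution
stats.solution  # nonetheless very close to zeros(10)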