Closed ChrisRackauckas closed 3 months ago
No, that's not the issue. This is part of the OptimizationBase merge. I am looking at it. It will be needed by other backends as well.
But why is the PRIMA example using autodiff in the first place? I agree that there are two issues, but this doc example shouldn't be differentiating anything.
Because in the constrained case PRIMA expects a different interface for linear and nonlinear constraints:
Ω = { x ∈ ℝⁿ | xl ≤ x ≤ xu, Aₑ⋅x = bₑ, Aᵢ⋅x ≤ bᵢ, cₑ(x) = 0, and cᵢ(x) ≤ 0 }
I use autodiff to create this representation from the functional interface
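The idea can be sketched roughly as follows: for an affine constraint c(x) = A·x − b, the Jacobian is the constant matrix A and b = −c(0), so differentiating the functional interface recovers the linear operators PRIMA wants. This is a minimal Python illustration using finite differences in place of AD; `extract_linear_constraint` is a hypothetical helper, not the actual Optimization.jl code.

```python
import numpy as np

def extract_linear_constraint(c, n, eps=1e-6):
    """Recover (A, b) such that c(x) = A @ x - b, assuming c is affine.

    Finite differences stand in for the AD pass; for a truly affine c
    the recovered A is exact up to floating-point error.
    """
    c0 = np.asarray(c(np.zeros(n)))
    b = -c0                       # c(0) = A @ 0 - b = -b
    m = c0.shape[0]
    A = np.empty((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        # For affine c: c(eps*e_j) - c(0) = eps * A[:, j]
        A[:, j] = (np.asarray(c(e)) - c0) / eps
    # Sanity check at a random point: the affine model should reproduce c
    x = np.random.default_rng(0).standard_normal(n)
    assert np.allclose(A @ x - b, np.asarray(c(x)), atol=1e-4)
    return A, b

# Example: c(x) = [x0 + 2*x1 - 3]  ->  A = [[1, 2]], b = [3]
A, b = extract_linear_constraint(lambda x: np.array([x[0] + 2 * x[1] - 3.0]), 2)
```

If c is genuinely nonlinear, the Jacobian is not constant and the constraint stays in the nonlinear blocks cₑ/cᵢ instead.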
Also, in general, we would never error if an AD backend is passed even for a derivative-free optimizer. We do throw an error when it's the other way around though, and that should get better with https://github.com/SciML/Optimization.jl/pull/715 and the corresponding PR in SciMLBase. Though that's not entirely correct yet.
oh it requires knowing the linear operators?
Yes
Fixes https://github.com/SciML/Optimization.jl/issues/719