The recommended way to do this is just to set a new objective.
We supported ScalarCoefficientChange
because many solvers provide an efficient way of updating a linear objective coefficient. In contrast, most solvers lack support for explicitly modifying a quadratic coefficient, or if they do, it can require an expensive update of the Q matrix.
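For context, this is what the existing linear pathway looks like at the MOI level; a minimal, untested sketch using MOI.Utilities.Model as a stand-in for a solver:
using MathOptInterface
const MOI = MathOptInterface
model = MOI.Utilities.Model{Float64}()
x = MOI.add_variable(model)
# Objective: 2x
f = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(2.0, x)], 0.0)
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.set(model, MOI.ObjectiveFunction{typeof(f)}(), f)
# In-place update of the linear coefficient of x from 2.0 to 3.0:
MOI.modify(
    model,
    MOI.ObjectiveFunction{typeof(f)}(),
    MOI.ScalarCoefficientChange(x, 3.0),
)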
What's the use-case?
Here is how I would implement your example from Dualization (although I haven't tested):
using JuMP, MosekTools, Dualization
λ = rand(2,2)
r = rand(2,2)
ρ = rand()
model = Model(dual_optimizer(Mosek.Optimizer))
@variable(model, X[1:2, 1:2], PSD)
@variable(model, Y[1:2, 1:2])
@variable(model, t >= 0)
@constraint(model, c[i=1:2, j=1:2], Y[i, j] + X[i, j] == r[i, j])
@constraint(model, t >= sum(Y[i, j]^2 for i = 1:2, j=1:2))
@objective(model, Min, sum(λ[i, j] * Y[i, j] for i = 1:2, j = 1:2) + ρ * t)
optimize!(model)
# To update `r`:
set_normalized_rhs(c[1, 1], 1.0)
# To update `λ`:
set_objective_coefficient(model, Y[1, 1], 1.0)
# To update `ρ`:
set_objective_coefficient(model, t, 1.0)
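(The epigraph variable t moves the quadratic penalty into a constraint, so ρ only ever appears as a linear coefficient on t in the objective, which is why set_objective_coefficient is enough to update it.)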
I have a consensus-ADMM algorithm in which the objective function is updated in each iteration (somewhat similar to my MWE). By far the most convenient approach would be to set a new objective, but the dualization is a big performance downgrade.
Thanks for the potential workaround, but adding extra variables and constraints might hurt performance and is a bit overcomplicated for my use case. If it were not for the dualize-in-every-iteration issue, I could just set a new objective
@objective(m, Min, dot(λ, r - X) + ρ * sum(x -> x^2, r - X))
without worries.
We supported ScalarCoefficientChange because many solvers provide an efficient way of updating a linear objective coefficient
Are there additional advantages? I guess JuMP also benefits from in-place modifications, independent of whether the solver supports special features. I have some constraints that are expensive to build, but with set_normalized_rhs it is very easy to modify the RHS without recomputing everything from scratch. I can imagine a situation in which the quadratic term could be modified without having to rebuild the whole function.
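To illustrate the kind of in-place update I mean, here is a minimal sketch with set_normalized_rhs (toy model, no particular solver):
using JuMP
model = Model()
@variable(model, x >= 0)
@constraint(model, con, 2x <= 10)
@objective(model, Min, x)
# Later iterations only touch the RHS; the constraint is not rebuilt:
set_normalized_rhs(con, 12)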
So coming back to this, I'm still hesitant to add this to MOI because we don't have a solver that efficiently supports it. One fix to resolve the dualization issue might be for Dualization.jl to support resetting the objective without having to dualize the entire problem.
Yes, I think that is the reasonable way to continue.
cc @guilhermebodin thoughts?
Solvers can efficiently update just one term of the quadratic objective. This might be useful for progressive hedging and for POI (ParametricOptInterface.jl).
It is possible to hack into Dualization so that only the objective is changed, but this would be unsafe for most use cases because it would mess up many of the maps between the primal and dual problems. I still think that MOI.modify would be the best way.
Background: https://github.com/jump-dev/Dualization.jl/issues/90 and a Discourse discussion.
MOI.modify makes in-place modifications to the coefficients of a function. Through MOI.ScalarCoefficientChange it is possible to modify the linear terms of a ScalarQuadraticFunction, but as of now the only way to modify the quadratic components is to define the whole function again. I think it would be beneficial to extend MOI.ScalarCoefficientChange, or to add a new type, to support in-place modification of the quadratic coefficients.
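To make the request concrete, here is an untested sketch: the MOI.modify call shows the linear modification that already works on a quadratic objective, while the commented-out type at the end is purely hypothetical (its name and fields are illustrative, not part of MOI) and shows the kind of quadratic modification this issue asks for:
using MathOptInterface
const MOI = MathOptInterface
model = MOI.Utilities.Model{Float64}()
x = MOI.add_variable(model)
# Objective: x^2 + 2x (the 1/2 x'Qx convention means the (x, x) term needs coefficient 2.0)
f = MOI.ScalarQuadraticFunction(
    [MOI.ScalarQuadraticTerm(2.0, x, x)],
    [MOI.ScalarAffineTerm(2.0, x)],
    0.0,
)
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.set(model, MOI.ObjectiveFunction{typeof(f)}(), f)
# Supported today: change the linear coefficient of x in place.
MOI.modify(
    model,
    MOI.ObjectiveFunction{typeof(f)}(),
    MOI.ScalarCoefficientChange(x, 3.0),
)
# Hypothetical (illustrative only): a modification type for quadratic terms,
# analogous to ScalarCoefficientChange.
# struct ScalarQuadraticCoefficientChange{T} <: MOI.AbstractFunctionModification
#     variable_1::MOI.VariableIndex
#     variable_2::MOI.VariableIndex
#     new_coefficient::T
# end
# MOI.modify(model, MOI.ObjectiveFunction{typeof(f)}(),
#            ScalarQuadraticCoefficientChange(x, x, 4.0))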