jump-dev/Convex.jl

A Julia package for disciplined convex programming
https://jump.dev/Convex.jl/stable/

Add OptimizationSenseAtom #636

Closed: odow closed this 2 months ago

odow commented 2 months ago

Closes #310

codecov[bot] commented 2 months ago

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.87%. Comparing base (8a91e05) to head (50cce80).

Additional details and impacted files

```diff
@@            Coverage Diff            @@
##           master     #636     +/-   ##
=========================================
  Coverage   97.86%   97.87%
=========================================
  Files          88       89      +1
  Lines        5114     5128     +14
=========================================
+ Hits         5005     5019     +14
  Misses        109      109
```


odow commented 2 months ago

I don't know what's up with nightly

blegat commented 2 months ago

Looks good, maybe add a docstring explaining what this is used for? IIUC, this is to get an error if we don't use it the right way, e.g.:

```julia
x = Variable()  # added so the snippet is self-contained
t = Variable()
# intended as an epigraph formulation of abs(x)
add_constraint!(t, t >= x)
add_constraint!(t, t >= -x)
maximize(t)
```

Here, because we maximize t, the formulation no longer yields the absolute value. Now if you do

```julia
maximize(OptimizationSenseAtom(t, MOI.MIN_SENSE))
```

then you get an error saying it's not DCP because t is convex.
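
For contrast, here is a minimal sketch of the DCP-compliant version (my illustration, not from the PR; Clarabel as the solver is an arbitrary choice): minimizing t keeps the epigraph formulation valid, so t recovers abs(x) at the optimum:

```julia
using Convex, Clarabel

x = Variable()
t = Variable()
# epigraph formulation of abs(x): the constraints only bound t from
# below, so the reformulation is valid only when t is minimized
p = minimize(t, [t >= x, t >= -x, x == -3])
solve!(p, Clarabel.Optimizer)
evaluate(t)  # ≈ 3.0 == abs(-3)
```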

ericphanson commented 2 months ago

Looking at this more, I think we are close to this working with Problems, which gives us the nicer syntax of minimize/maximize, and potentially more natural places to put constraints:

```julia
using Convex, LinearAlgebra, Clarabel
using Convex: AbstractExpr

# monkeypatch the existing `getproperty` so a `Problem` can behave as a
# sub-expression: it needs a `size`, and an `optval` that is `nothing`
# before the problem has been solved
function Base.getproperty(p::Problem, s::Symbol)
    if s === :optval
        if getfield(p, :status) == Convex.MOI.OPTIMIZE_NOT_CALLED
            return nothing
        else
            return Convex.objective_value(p)
        end
    elseif s === :size
        return p.objective.size
    end
    return getfield(p, s)
end

function lamb_min(A::AbstractExpr)
    t = Variable()
    n = size(A, 1)
    n == size(A, 2) || throw(ArgumentError("A must be square"))
    p = maximize(t, A - t * Matrix(1.0I, n, n) ⪰ 0)
    return p
end

A = Variable(2, 2)  # assumed: not shown in the original comment, but the printed model implies a 2×2 variable

p = maximize(lamb_min(A) + 1, [A >= 0, A[1, 1] == 2.0])

solve!(p, Clarabel.Optimizer)
```
```julia
julia> print(p.model)
Maximize ScalarAffineFunction{Float64}:
 1.0 + 1.0 v[5]

Subject to:

VectorAffineFunction{Float64}-in-Zeros
 ┌               ┐
 │-2.0 + 1.0 v[1]│
 └               ┘ ∈ Zeros(1)

VectorAffineFunction{Float64}-in-Nonnegatives
 ┌              ┐
 │0.0 + 1.0 v[1]│
 │0.0 + 1.0 v[2]│
 │0.0 + 1.0 v[3]│
 │0.0 + 1.0 v[4]│
 └              ┘ ∈ Nonnegatives(4)

VectorAffineFunction{Float64}-in-PositiveSemidefiniteConeSquare
 ┌                                    ┐
 │0.0 + 1.0 v[1] - 1.0 v[5]           │
 │0.0 + 1.0 v[3] + 1.0 v[2] - 1.0 v[3]│
 │0.0 + 1.0 v[3]                      │
 │0.0 + 1.0 v[4] - 1.0 v[5]           │
 └                                    ┘ ∈ PositiveSemidefiniteConeSquare(2)
```

However, it's not fully correct yet, since if I flip `maximize` to `minimize` in the definition of `lamb_min` I don't get an error.
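
For concreteness, a sketch of the case that should fail (the `lamb_min_wrong` name is mine, and it assumes the monkeypatch above is in effect): the sense is flipped inside the inner problem, so the outer `maximize` should be rejected as non-DCP, but today it builds and solves silently:

```julia
# same as `lamb_min`, but minimizing t: the PSD constraint only bounds
# t from above, so this no longer computes the minimum eigenvalue
function lamb_min_wrong(A::AbstractExpr)
    t = Variable()
    n = size(A, 1)
    n == size(A, 2) || throw(ArgumentError("A must be square"))
    return minimize(t, A - t * Matrix(1.0I, n, n) ⪰ 0)
end

p_bad = maximize(lamb_min_wrong(A) + 1, [A >= 0, A[1, 1] == 2.0])
solve!(p_bad, Clarabel.Optimizer)  # solves, but the model is wrong
```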

IMO if we get this working, it will be easier for users than introducing a new atom.

odow commented 2 months ago

Okay, let me take a look.

odow commented 2 months ago

Closing for now. I'll take another shot at this.