jump-dev / JuMP.jl

Modeling language for Mathematical Optimization (linear, mixed-integer, conic, semidefinite, nonlinear)
http://jump.dev/JuMP.jl/

Fix querying dual of symmetric equality constraints #3797

Closed: odow closed this 3 months ago

odow commented 3 months ago

Closes #3796

I need to think about whether there is a better option. I don't think we should just fix the default reshape_vector, because we don't need to modify anything for the primal, only the dual.
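For context, here is a minimal sketch of the kind of query this PR fixes (my own construction, not the reproducer from #3796; Clarabel, which appears later in this thread, stands in for any conic solver):

```julia
# Minimal sketch: query the dual of a symmetric equality (Matrix-in-Zeros)
# constraint. The constraint `c` is stored in triangular vectorized form,
# and `dual(c)` must reshape the dual vector back into a matrix.
using JuMP, LinearAlgebra
import Clarabel

model = Model(Clarabel.Optimizer)
set_silent(model)
@variable(model, X[1:2, 1:2], Symmetric)
@constraint(model, c, X == Symmetric([1.0 2.0; 2.0 3.0]))
@objective(model, Min, sum(X))
optimize!(model)
dual(c)  # a 2x2 matrix; the off-diagonal scaling is what #3796 is about
```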

codecov[bot] commented 3 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 98.40%. Comparing base (bf662d4) to head (39b99bc).

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master    #3797      +/-   ##
==========================================
+ Coverage   98.39%   98.40%   +0.01%
==========================================
  Files          44       44
  Lines        5907     5956      +49
==========================================
+ Hits         5812     5861      +49
  Misses         95       95
```


odow commented 3 months ago

Okay, this looks like the problem. We never had any tests for this.

blegat commented 3 months ago

You can use JuMP.dual_shape. This is similar to what we do in https://github.com/jump-dev/MathOptInterface.jl/blob/a15b67fe47ad20caf316eb1bba0369a46ceb5a34/src/Bridges/Constraint/bridges/square.jl#L401-L411

The rule for the dual is to multiply by the adjoint of the linear transformation we applied. When the matrix was symmetric, we assume that the transformation took the average of the two off-diagonal entries, so the transpose would assign half of the dual to each entry. But because the triangular dot product multiplies off-diagonal entries by 2, the adjoint does not divide by 2 and simply assigns the same value to both off-diagonal entries. Here, because Zeros uses the standard dot product, the adjoint and the transpose coincide, so, assuming the transformation takes the average, this PR computes the adjoint. Could you add a small comment explaining that we are simply applying the adjoint (assuming we used the average)?
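To make the adjoint argument concrete, here is a small numeric sketch in plain Julia (the 2x2 case, the basis ordering, and the matrices are my own illustration, not code from the PR):

```julia
using LinearAlgebra

# Full 2x2 matrix flattened as [a11, a21, a12, a22]; upper triangle as
# [a11, a12, a22]. The "average" map sends the full vector to the triangle,
# averaging the two off-diagonal entries:
A = [1.0 0.0 0.0 0.0;
     0.0 0.5 0.5 0.0;
     0.0 0.0 0.0 1.0]

d = [2.0, 3.0, 4.0]  # a dual vector on the triangular shape

# Standard inner product on both sides (the Zeros case): the adjoint is the
# plain transpose, which assigns half of d[2] to each off-diagonal entry.
A' * d               # == [2.0, 1.5, 1.5, 4.0]

# Triangular inner product (off-diagonals weighted by 2, the PSD case): the
# adjoint picks up the weight, assigning the full d[2] to both entries.
D = Diagonal([1.0, 2.0, 1.0])  # Gram matrix of the triangular inner product
A' * (D * d)         # == [2.0, 3.0, 3.0, 4.0]
```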

odow commented 3 months ago

You can use JuMP.dual_shape

I don't think so, or at least not for SymmetricMatrixShape and HermitianMatrixShape, because these are a special case of us interpreting Matrix-in-Zeros. If we implemented dual_shape, it would also modify things like Matrix-in-PSD.

Or we need a new matrix shape just for these Matrix-in-Zeros constraints.
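A small illustration of the two cases (my own example; both constraints below carry SymmetricMatrixShape, which is why a dual_shape method on the shape type alone could not tell them apart):

```julia
using JuMP, LinearAlgebra

model = Model()
@variable(model, X[1:2, 1:2], Symmetric)
# Matrix-in-Zeros: the equality case whose dual this PR fixes.
@constraint(model, c_eq, X == Symmetric([1.0 0.0; 0.0 1.0]))
# Matrix-in-PSD: must keep its current dual handling.
@constraint(model, c_psd, X in PSDCone())
```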

blegat commented 3 months ago

Or we need a new matrix shape just for these Matrix-in-Zeros constraints.

Yes, maybe we need to do that. I'm in favor

odow commented 3 months ago

Name suggestions for the primal and dual shapes?

SquareSymmetricMatrixShape and SquareSymmetricMatrixDualShape?

odow commented 3 months ago

Okay, instead of adding more shapes, I've added a needs_adjoint_dual kwarg.
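A hedged sketch of how the flag might be used (the exact signature is my reading of this thread, not copied from the diff):

```julia
# Hypothetical call sites: equality (Matrix-in-Zeros) constraints construct
# their shape with the flag set, so only dual reshaping applies the adjoint,
# while the primal reshape and the PSD case are left untouched.
primal_shape = SymmetricMatrixShape(3)                             # default
zeros_shape  = SymmetricMatrixShape(3; needs_adjoint_dual = true)  # equality case
```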

araujoms commented 3 months ago

I'm afraid this PR introduced a bug: before, if I tried to constrain A == B for a Symmetric A and a Hermitian B, it would give me an error. Now it silently fails:

```julia
using JuMP
using LinearAlgebra
import Hypatia
import Clarabel
import Ket
import Random

function dual_test(d)
    Random.seed!(1)
    ρ = Ket.random_state(Float64, d^2, 1)

    model = Model()
    @variable(model, σ[1:d^2, 1:d^2] in PSDCone())
    @variable(model, λ)
    noisy_state = Hermitian(ρ + λ * I(d^2))
    # Symmetric σ constrained equal to Hermitian noisy_state: the problematic case
    @constraint(model, witness_constraint, σ == noisy_state)
    @objective(model, Min, λ)
    @constraint(model, Ket.partial_transpose(σ, 2, [d, d]) in PSDCone())

    set_optimizer(model, Hypatia.Optimizer)
    #set_optimizer(model, Clarabel.Optimizer)
    optimize!(model)

    W = dual(witness_constraint)
    return W
end
```

blegat commented 3 months ago

Oops, can you open an issue so that we can track this?

odow commented 3 months ago

Now it silently fails

No, now it silently succeeds :smile: