ewoodhead / QuantumNPA.jl

NPA in Julia

Extracting solution & Injecting custom variables #4

Closed t-rothe closed 1 week ago

t-rothe commented 2 weeks ago

Hey, great to see some recent activity on optimising performance. I am very excited about this package as well. I was wondering whether there is some easy way to implement the following two things:

1) Obtaining the moment matrix elements from the solution that correspond to certain projectors / "the optimised CG representation". E.g. when working with PA = projector(1, 1:2, 1:2, full=true) and PB = projector(2, 1:2, 1:2, full=true), then after optimising CHSH, for example, I want to obtain the optimised elements for PA[1,1], PA[1,2], PB[1,1], PB[1,2], PA[1,1]*PB[1,1], PA[1,1]*PB[1,2], PA[1,2]*PB[1,1], PA[1,2]*PB[1,2].

2) Injecting custom (JuMP) variables. So, say I have some 4D array of additional variables $\Delta$ and a given (constant) 4D array $p_{obs}$. Then I'd like to minimise $\sum_{i,j,k,l} \Delta[i,j,k,l]$ such that $PA[i,k] \cdot PB[j,l] - p_{obs}[i,j,k,l] \leq \Delta[i,j,k,l]$ and $-\Delta[i,j,k,l] \leq PA[i,k] \cdot PB[j,l] - p_{obs}[i,j,k,l]$.

I had to resort to using Wittek's ncpol2sdpa via PythonCall.jl for both. E.g.

using PythonCall
n2s = pyimport("ncpol2sdpa")

# `p_obs` (a 2x2x2x2 array of observed probabilities) and the relaxation
# `level` are assumed to be defined elsewhere.
P = n2s.Probability([2, 2], [2, 2])
X = reshape(pyconvert(Vector, n2s.generate_variables("X", prod(size(p_obs)), commutative=true)),
            size(p_obs)...)
sdp_vars = pylist([P.get_all_operators()..., reshape(X, :)...])
sdp = n2s.SdpRelaxation(sdp_vars, verbose=false, parallel=true)

objective = sum(X)  

# Constraints: P - p_obs <= X and -X <= P - p_obs.
ineq_constraints = []
for (i, j, k, l) in Iterators.product(0:1, 0:1, 0:1, 0:1)
    push!(ineq_constraints, X[i+1, j+1, k+1, l+1] - (P([i, j], [k, l]) - p_obs[i+1, j+1, k+1, l+1]))
    push!(ineq_constraints, X[i+1, j+1, k+1, l+1] + (P([i, j], [k, l]) - p_obs[i+1, j+1, k+1, l+1]))
end

sdp.get_relaxation(level=level,
                   objective=objective,
                   momentinequalities=ineq_constraints,
                   substitutions=P.substitutions)

sdp.solve(solver="mosek")

# Extract the solution:
P_sol = Array{Float64}(undef, 2, 2, 2, 2)
for (x, y, a, b) in Iterators.product(0:1, 0:1, 0:1, 0:1)
    P_sol[a+1, b+1, x+1, y+1] = pyconvert(Float64, sdp[P([a, b], [x, y])])
end

but that's obviously horrible in terms of performance.

Maybe I am overlooking something here, but I couldn't find an easy way to do the above with QuantumNPA.jl. Is this feasible, or out of reach for this package? It would be great to have these as built-in features in the future as well.

ewoodhead commented 2 weeks ago

Hi, sorry to take a few days to reply.

I think the short answer to both of your questions is: no, QuantumNPA doesn't support what you're asking for out of the box. If you're familiar with JuMP then probably your best bet for the moment is to use functions from QuantumNPA such as npa_moment to find out which operators go where in the moment matrix, but otherwise write your own code using JuMP to solve the problem you want. As it happens, Mateus posted example code for the primal problem here that you could try using as a starting point.

The reason 1. isn't supported out of the box is simply that QuantumNPA takes the NPA relaxation and gives JuMP the dual of that problem. (This is based on experience that the dual tends to be faster to solve.) You can't get the expectation values of operators from this because these don't exist as optimisation variables in the dual. But you can get them if you write code to solve the primal problem instead. Here's a quick minimal example for CHSH:

using QuantumNPA
using JuMP
using SCS

# Make a new JuMP model and make optimisation run quietly.
model = Model(SCS.Optimizer)
set_silent(model)

# Define quantum operators and find the moment matrix at level "1 + A B".
@dichotomic A1 A2 B1 B2
gamma = npa_moment([A1, A2, B1, B2], "1 + A B")

# Get the non-identity monomials in the moment matrix.
mons = filter(!isidentity, monomials(gamma))

# Create JuMP variables indexed by these monomials. So there will be JuMP
# variables v[A1], v[A2], v[B1], ..., v[A1*A2*B2*B1].
@variable(model, v[mons])

# Construct the moment matrix in terms of these JuMP variables, and add it as
# a PSD constraint in the model.
G = gamma[Id] + sum(gamma[m]*v[m] for m in mons)
@constraint(model, G in PSDCone())

# Construct CHSH and add it as the objective to the model.
S = v[A1*B1] + v[A1*B2] + v[A2*B1] - v[A2*B2]
@objective(model, Max, S)

# Solve and print some info about the solution.
optimize!(model)
println("S = ", objective_value(model))
println("<A1 B1> = ", value(v[A1*B1]))

I guess if you've already got something like this then you can just add additional variables to the model.
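
For example, something like this would handle your point 2 in correlator form. This is only a rough sketch building on the model above; the moms list and the c_obs values are hypothetical placeholders standing in for whatever observed data you have:

# Hypothetical observed values for the four correlators (illustrative only).
moms = [A1*B1, A1*B2, A2*B1, A2*B2]
c_obs = [0.7, 0.7, 0.7, -0.7]

# Slack variables Delta[i] >= |v[moms[i]] - c_obs[i]|.
@variable(model, Delta[1:length(moms)] >= 0)
for (i, m) in enumerate(moms)
    @constraint(model, v[m] - c_obs[i] <= Delta[i])
    @constraint(model, c_obs[i] - v[m] <= Delta[i])
end

# Minimise the total slack (this replaces the CHSH objective above).
@objective(model, Min, sum(Delta))
optimize!(model)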

araujoms commented 2 weeks ago

You can use the package Dualization; it allows you to specify the primal and solve the dual, without any performance loss. Just call

using Dualization

set_optimizer(model, Dualization.dual_optimizer(SCS.Optimizer))

before optimize!.

You can also extract the primal variables when specifying the dual, as I explain here.
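
With the dual_optimizer route this needs no extra work, since JuMP maps the solver's results back to the model you wrote. A quick sketch reusing the model and variables from the example above:

set_optimizer(model, Dualization.dual_optimizer(SCS.Optimizer))
optimize!(model)

# Primal quantities are still accessible as before.
println("<A1 B1> = ", value(v[A1*B1]))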

ewoodhead commented 2 weeks ago

Oddly, going to the dual seems considerably slower in this instance, specifically when using SCS.

Maybe whether the primal or dual is faster depends on which column the solver is in here?

araujoms commented 2 weeks ago

Yes, this will certainly depend on the solver.

ewoodhead commented 2 weeks ago

So I think, as an answer to the original question for now: adapt and run something like the code above, and if you want to use Mosek then use Dualization.dual_optimizer(Mosek.Optimizer) as the solver.
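
For concreteness, a minimal sketch of that setup (assuming the MosekTools and Dualization packages are installed):

using JuMP, Dualization, MosekTools

model = Model(Dualization.dual_optimizer(Mosek.Optimizer))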

t-rothe commented 1 week ago

Thanks for the detailed answer(s)! That's a great workaround and sufficiently fast for my needs so far. (Though, in this context, I am not yet convinced about the efficiency benefit of Dualization.jl)

Apparently, I indeed misinterpreted something while previously inspecting the package's code on my own. With the example(s) above and the explanation of the code you gave in https://github.com/ewoodhead/QuantumNPA.jl/pull/2, it's a lot more comprehensible what the code does and how to implement the primal. Thanks!