sisl / NeuralVerification.jl

Methods to soundly verify deep neural networks
MIT License

Dealing with disjunction of linear constraints #67

Open tomerarnon opened 5 years ago

tomerarnon commented 5 years ago

Optimization-based methods (those that take complements) work well for "exclusion zone" type properties, but not as well when the property is f(X) ⊆ Y for a convex Y, because the complement of Y is a disjunction of linear constraints. To solve this type of problem, one needs to solve N optimization problems (one per constraint of Y) and show that all N are infeasible. At the moment, the only place this is described is the new CARS workshop ACAS notebook. We could also make solving all N problems the standard operating procedure for these types of properties.
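
Spelled out (a restatement of the above, assuming Y is given in half-space representation):

```latex
% Y as an intersection of N half-spaces, and its complement:
Y   = \{\, y \;:\; a_i^\top y \le b_i,\ i = 1, \dots, N \,\}
Y^c = \bigcup_{i=1}^{N} \{\, y \;:\; a_i^\top y > b_i \,\}
```

So f(X) ⊆ Y holds iff, for every i, the problem "find x ∈ X with a_iᵀ f(x) > b_i" is infeasible; a single feasible sub-problem yields a counterexample.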

chelseas commented 4 years ago

You can represent this disjunction of N linear constraints using a ReLU or max piecewise-linear encoding. E.g., if you want x ∈ [a, b], then you need to certify that x < a || x > b is unsatisfiable. Create new variables y = a - x and z = x - b. Both y and z should be <= 0 if the problem is "in bounds", so the algorithm should search for solutions where either is > 0, which you can write as max(y, z) > 0. This can be nested to represent N disjunctions.

BUT, superlevel sets of a convex function are not necessarily convex, so max(y, z) > 0 is a nonconvex constraint. SO, solving N problems with only half-space constraints yields N convex problems, which are far easier to solve, so that may be faster.
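
A minimal sketch of the N-convex-problems route for the interval example above (JuMP and GLPK are my assumption here; the thread doesn't prescribe a modeling layer):

```julia
using JuMP, GLPK

# Certify x ∈ [a, b] by maximizing y = a - x and z = x - b in two separate
# convex LPs, instead of imposing the nonconvex condition max(y, z) > 0.
a, b = 0.0, 1.0
for (label, coeff, offset) in (("y = a - x", -1.0, a), ("z = x - b", 1.0, -b))
    model = Model(GLPK.Optimizer)
    @variable(model, -10 <= x <= 10)  # stand-in for the reachable output range
    @objective(model, Max, coeff * x + offset)
    optimize!(model)
    # a positive optimum means the corresponding disjunct is satisfiable
    objective_value(model) > 0 && println("out of bounds: $label can be > 0")
end
```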

tomerarnon commented 4 years ago

For the record, since this came up again, the initial proposal was to implement essentially:

```julia
# For solvers that can only handle HalfSpace output constraints, like Reluplex.
# Rather than passing things in piecemeal as below, this operation could happen
# automatically any time problem.output is an AbstractPolytope that isn't a HalfSpace.
using NeuralVerification, LazySets

function convex_nv(solver, net, input, convex_output)
    res = nothing
    for hs in constraints_list(convex_output)
        prob = Problem(net, input, hs)
        res = solve(solver, prob)
        # violating any single half-space falsifies the whole property
        res.status === :violated && return res
    end
    res  # all half-space sub-problems hold
end
```
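
Hypothetical usage (the network path and the sets are illustrative only, not from the thread):

```julia
using NeuralVerification, LazySets

net   = read_nnet("examples/networks/small_nnet.nnet")  # illustrative path
input = Hyperrectangle(low = [-1.0], high = [1.0])
# output property -100 ≤ y ≤ 100 as an HPolytope with two half-space constraints
Y = HPolytope([HalfSpace([1.0], 100.0), HalfSpace([-1.0], 100.0)])

res = convex_nv(Reluplex(), net, input, Y)
res.status  # :holds only if every half-space of Y holds
```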

Today we noted that instead of the above, we can likely do

```julia
# build the model once; only the output constraint changes per iteration
encode_entire_problem_except_output(model, problem)
for hs in constraints_list(convex_output)
    new_constraints = add_set_constraint!(model, hs, z[end])
    # ... rest of the algorithm ...
    result.status == :violated && return result
    # at the end, drop this half-space's constraints so the model can be reused:
    delete.(model, new_constraints)
end
```
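
As a runnable toy of that add/solve/delete pattern (a trivial identity "network" and GLPK stand in for the real encoding, which this sketch does not include):

```julia
using JuMP, GLPK, LazySets

convex_output = HPolytope([HalfSpace([1.0], 1.0), HalfSpace([-1.0], 0.0)])  # 0 ≤ y ≤ 1

model = Model(GLPK.Optimizer)
@variable(model, -5 <= x <= 5)
@variable(model, y)
@constraint(model, y == x)  # stand-in for encoding the network once

for hs in constraints_list(convex_output)
    # search the complement of this half-space: a⋅y ≥ b + ε
    c = @constraint(model, hs.a[1] * y >= hs.b + 1e-6)
    optimize!(model)
    if termination_status(model) == MOI.OPTIMAL
        println("violated: the output can leave the set through $hs")
    end
    delete(model, c)  # restore the model for the next half-space
end
```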

for most of the algorithms that would be affected by this. This way we don't have to re-encode the problem for each half-space, and in some cases we may also be able to take advantage of a warm start.