JuliaReach / LazySets.jl

Scalable symbolic-numeric set computations in Julia
https://juliareach.github.io/LazySets.jl/

Inconsistency in linear_map between vrep and division algorithm #1928

Closed. martenlienen closed this issue 4 years ago.

martenlienen commented 4 years ago

Hey, check out the following example. It defines a polytope P that is restricted to 0 in its last two coordinates. It is bounded and non-empty, so applying a linear map should clearly not change its emptiness. And yet, the two algorithms disagree: "vrep" simply maps the vertices and correctly finds that M * P is also non-empty, but "division" produces an empty polytope.

I have looked at the implementation of _linear_map_hrep and it solves a system of linear equations to update the constraints. I am no expert in geometry, but wouldn't a matrix with fewer rows than columns, i.e. a projection into a lower-dimensional space, require a proper variable elimination procedure such as Fourier-Motzkin, even if the matrix has full row rank?

import Polyhedra
using LazySets

B = [
    -0.0 -0.0 -1.0 -0.0;
    -0.0 -0.0 -0.0 -1.0;
    0.0 0.0 1.0 0.0;
    0.0 0.0 0.0 1.0;
    2.9947065529531898 -1.1028295687569036 -0.0 -0.0;
    3.04356887693959 1.0 -0.0 -0.0;
    -3.043568876939177 -1.0 -0.0 -0.0;
    -2.7154753896639945 1.0 -0.0 -0.0;
    -3.1209566358111682 -1.0 -0.0 -0.0;
    -3.117487135897941 1.0 -0.0 -0.0;
    3.1209566358106873 1.0 -0.0 -0.0;
    4.400608112720001 -1.4115882186157707 -0.0 -0.0
]
b = [
    0.0,
    0.0,
    -0.0,
    -0.0,
    -1.0,
    3.98729727200628,
    -3.3782603325110476,
    1.803258851263224,
    -3.4045375136630747,
    1.6766140435881927,
    4.0235794003261,
    -1.0
]
P = HPolytope(B, b)
@assert !isempty(P)
@assert isbounded(P, false)

M = [
    -0.36650336 1.6418293 -0.6497878 -1.5073241;
    -0.3861024 -1.0513428 -1.083672 -0.6236442;
    0.41887435 -0.30923694 1.6104311 1.7603866
]

@assert !isempty(linear_map(M, P; algorithm="vrep"))     # passes
@assert !isempty(linear_map(M, P; algorithm="division")) # fails: "division" returns an empty polytope
mforets commented 4 years ago

Thanks for opening the issue, I confirm that there is a bug in "division". Here is a smaller example which I think exhibits the same problem:

H = Hyperrectangle(low=[0, 0.5, 0], high=[1, 1, 0.])
@show isempty(H)
@show vertices_list(H)

M = ones(2, 3)

MH = linear_map(M, H)

@show vertices_list(MH);

P = convert(HPolytope, H)
@show isempty(P)
@show vertices_list(P)

MP_div = linear_map(M, P, algorithm="division")
@show isempty(MP_div) # should give false!

MP_vrep = linear_map(M, P, algorithm="vrep")
@show isempty(MP_vrep)
@show vertices_list(MP_vrep)

Output:

isempty(H) = false
vertices_list(H) = Array{Float64,1}[[1.0, 1.0, 0.0], [0.0, 1.0, 0.0], [1.0, 0.5, 0.0], [0.0, 0.5, 0.0]]
vertices_list(MH) = Array{Float64,1}[[0.5, 0.5], [2.0, 2.0]]
isempty(P) = false
vertices_list(P) = Array{Float64,1}[[1.0, 1.0, 0.0], [0.0, 1.0, 0.0], [1.0, 0.5, 0.0], [0.0, 0.5, 0.0]]
isempty(MP_div) = true
isempty(MP_vrep) = false
vertices_list(MP_vrep) = Array{Float64,1}[[0.5, 0.5], [2.0, 2.0]]
mforets commented 4 years ago

a projection into a lower-dimensional space, require a proper variable elimination procedure

Yes, I think that is the correct approach for the hrep with a rectangular linear map, and "division" is basically flawed if A is not invertible. (I remember doing some experiments and finding that it was useful in some problems, but I have to think about it.)

Here is a quick implementation of the variable elimination approach: if P : Ax <= b and y = Mx, we consider the vector z = [y, x], write down the equalities and the inequalities, and then eliminate the last length(x) variables using Polyhedra.eliminate, which calls a backend library (CDDLib in this case):

using CDDLib, LinearAlgebra  # LinearAlgebra provides the uniform scaling I

# see the def of P and M from my previous comment
id_out = Matrix(1.0I, size(M, 1), size(M, 1))
Ax_leq_b = [Polyhedra.HalfSpace(vcat(zeros(2), c.a), c.b) for c in constraints_list(P)]
y_eq_Mx = [Polyhedra.HyperPlane(vcat(-id_out[i, :], M[i, :]), 0.0) for i in 1:size(M, 1)];
julia> Phrep = Polyhedra.hrep(y_eq_Mx, Ax_leq_b)
H-representation Polyhedra.Intersection{Float64,Array{Float64,1},Int64}:
2-element iterator of HyperPlane{Float64,Array{Float64,1}}:
 HyperPlane([-1.0, -0.0, 1.0, 1.0, 1.0], 0.0)
 HyperPlane([-0.0, -1.0, 1.0, 1.0, 1.0], 0.0),
6-element iterator of Polyhedra.HalfSpace{Float64,Array{Float64,1}}:
 HalfSpace([0.0, 0.0, 1.0, 0.0, 0.0], 1.0)
 HalfSpace([0.0, 0.0, 0.0, 1.0, 0.0], 1.0)
 HalfSpace([0.0, 0.0, 0.0, 0.0, 1.0], 0.0)
 HalfSpace([0.0, 0.0, -1.0, 0.0, 0.0], -0.0)
 HalfSpace([0.0, 0.0, 0.0, -1.0, 0.0], -0.5)
 HalfSpace([0.0, 0.0, 0.0, 0.0, -1.0], -0.0)

julia> Phrep_cdd = polyhedron(Phrep, CDDLib.Library(:float))
Polyhedron CDDLib.Polyhedron{Float64}:
2-element iterator of HyperPlane{Float64,Array{Float64,1}}:
 HyperPlane([-1.0, -0.0, 1.0, 1.0, 1.0], 0.0)
 HyperPlane([-0.0, -1.0, 1.0, 1.0, 1.0], 0.0),
6-element iterator of Polyhedra.HalfSpace{Float64,Array{Float64,1}}:
 HalfSpace([0.0, 0.0, 1.0, 0.0, 0.0], 1.0)
 HalfSpace([0.0, 0.0, 0.0, 1.0, 0.0], 1.0)
 HalfSpace([0.0, 0.0, 0.0, 0.0, 1.0], 0.0)
 HalfSpace([0.0, 0.0, -1.0, 0.0, 0.0], -0.0)
 HalfSpace([0.0, 0.0, 0.0, -1.0, 0.0], -0.5)
 HalfSpace([0.0, 0.0, 0.0, 0.0, -1.0], -0.0)

Fourier-Motzkin says that it cannot handle the linearity:

julia>  Peli_fm = Polyhedra.eliminate(Phrep_cdd, 3:5, Polyhedra.FourierMotzkin())
Cannot handle linearity

Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] myerror(::Int32) at /home/mforets/.julia/packages/CDDLib/AcD7T/src/error.jl:23
 [3] dd_fourierelimination at /home/mforets/.julia/packages/CDDLib/AcD7T/src/operations.jl:223 [inlined]
 [4] fourierelimination(::CDDInequalityMatrix{Float64,Float64}) at /home/mforets/.julia/packages/CDDLib/AcD7T/src/operations.jl:233
 [5] eliminate(::CDDLib.Polyhedron{Float64}, ::UnitRange{Int64}, ::FourierMotzkin) at /home/mforets/.julia/packages/CDDLib/AcD7T/src/polyhedron.jl:171
 [6] top-level scope at In[471]:1

But block elimination works:

julia> Peli_block = Polyhedra.eliminate(Phrep_cdd, 3:5, Polyhedra.BlockElimination())
Polyhedron CDDLib.Polyhedron{Float64}:
1-element iterator of HyperPlane{Float64,Array{Float64,1}}:
 HyperPlane([1.0, -1.0], 0.0),
5-element iterator of Polyhedra.HalfSpace{Float64,Array{Float64,1}}:
 HalfSpace([-0.0, -0.0], 1.0)
 HalfSpace([1.0, -0.0], 2.0)
 HalfSpace([-1.0, -0.0], -0.5)
 HalfSpace([-0.0, -0.0], 0.5)
 HalfSpace([-0.0, -0.0], 0.0)

julia> Peli_block = Polyhedra.removeduplicates(hrep(Peli_block))
H-representation CDDInequalityMatrix{Float64,Float64}:
1-element iterator of HyperPlane{Float64,Array{Float64,1}}:
 HyperPlane([1.0, -1.0], 0.0),
3-element iterator of Polyhedra.HalfSpace{Float64,Array{Float64,1}}:
 HalfSpace([-0.0, 0.0], 1.0)
 HalfSpace([0.5, 0.5], 2.0)
 HalfSpace([-0.5, -0.5], -0.5)

# check equivalence by dual inclusion wrt the "correct" transformation
julia> isequivalent(convert(HPolyhedron, Peli_block), MH)
true

Edit: trying this idea in the original problem also works; isequivalent(linear_map(M, P, algorithm="vrep"), convert(HPolytope, Peli_block)) returns true.
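
For reference, the recipe above can be wrapped into a generic helper along these lines. This is just a sketch (the name _linear_map_hrep_elimination is made up here and not part of LazySets), reusing the same Polyhedra/CDDLib calls as above:

using LazySets, Polyhedra, CDDLib, LinearAlgebra

# sketch: map P : Ax <= b under y = Mx by eliminating the x-block from the
# lifted system {y = Mx, Ax <= b} over z = [y, x]
function _linear_map_hrep_elimination(M, P)
    m, n = size(M)
    id_out = Matrix(1.0I, m, m)
    Ax_leq_b = [Polyhedra.HalfSpace(vcat(zeros(m), c.a), c.b) for c in constraints_list(P)]
    y_eq_Mx = [Polyhedra.HyperPlane(vcat(-id_out[i, :], M[i, :]), 0.0) for i in 1:m]
    Phrep = Polyhedra.hrep(y_eq_Mx, Ax_leq_b)
    Pcdd = Polyhedra.polyhedron(Phrep, CDDLib.Library(:float))
    # eliminate the x-variables, i.e. the last n coordinates of z
    Peli = Polyhedra.eliminate(Pcdd, (m+1):(m+n), Polyhedra.BlockElimination())
    Peli = Polyhedra.removeduplicates(Polyhedra.hrep(Peli))
    return convert(HPolyhedron, Peli)
end

With the P and M from the original report, isequivalent(_linear_map_hrep_elimination(M, P), linear_map(M, P, algorithm="vrep")) should then return true as well.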

martenlienen commented 4 years ago

I think what you propose is definitely correct in every case, but elimination is expensive. Is it required to eliminate all of the old variables anytime the matrix is not invertible? Let r = rank(M) and m, n = size(M). If m > n and r == n, would it be correct to embed the polytope into R^m by appending zeros (I mean extending all constraints to m dimensions and constraining the last m - n dimensions to 0), extend M to an m x m matrix with columns from the identity matrix to make it invertible, and then use the inverse algorithm? I think that if r < n, you always need the complete elimination.

mforets commented 4 years ago

I think what you propose is definitely correct in every case but elimination is expensive.

True, but the elimination is still less expensive than passing to the vertex representation. So it makes sense to add this elimination method as a new algorithm for the concrete linear map.

Is it required to eliminate all of the old variables anytime the matrix is not invertible?

Very interesting. I thought about your idea today and I think that it is conceptually correct! On some random examples, the speedup obtained is huge.

For reference, below is the code for the linear map with lift (some people use the word "lift" as opposed to "project"). The benchmark with the other methods can be found in this notebook together with some test code.

# see the discussion in LazySets #1928
using LazySets, LinearAlgebra

function _linear_map_hrep_lift(M, P)
    m, n = size(M)
    r = rank(M)
    @assert m > n
    @assert r == n

    # extend the linear map to an m x m matrix, placing an identity block in the last m-n columns (first m-n rows)
    Id_ext = Matrix([Diagonal(ones(m-n)); zeros(n, m-n)])
    Mext = hcat(M, Id_ext)

    id_out = Matrix(1.0I, m-n, m-n)

    # append zeros to the existing constraints, in the last m-n coordinates
    cext = [LazySets.HalfSpace(vcat(c.a, zeros(m-n)), c.b) for c in constraints_list(P)]

    # now fix the last m-n coordinates to zero
    cext = vcat(cext, [LazySets.HalfSpace(vcat(zeros(n), id_out[i, :]), 0.0) for i in 1:(m-n)],
                      [LazySets.HalfSpace(vcat(zeros(n), -id_out[i, :]), 0.0) for i in 1:(m-n)])

    Pext = HPolytope(cext)

    # now Mext is invertible and we can apply the linear map with the "inverse" algorithm
    return linear_map(Mext, Pext, algorithm="inverse")
end
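
As a quick sanity check (my own toy example with made-up numbers; it assumes a tall map with full column rank and a session where Polyhedra/CDDLib are available, as above), one can compare against the "vrep" algorithm:

using LazySets, LinearAlgebra
import Polyhedra, CDDLib  # backend for vertex enumeration

P2 = convert(HPolytope, Hyperrectangle(low=[-1.0, -1.0], high=[1.0, 1.0]))  # unit square in R^2
Mtall = [1.0 0.0; 0.0 1.0; 1.0 1.0]  # 3x2 map with full column rank

Q = _linear_map_hrep_lift(Mtall, P2)
@assert !isempty(Q)
@assert isequivalent(linear_map(Mtall, P2, algorithm="vrep"), Q)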
martenlienen commented 4 years ago

That is exactly what I had in mind. I just noticed one thing: to ensure that the extended matrix has full rank and is invertible, the extension with columns from the identity matrix cannot happen arbitrarily. That is because M has rank n, but that does not mean that every subset of n rows of M has rank n. Imagine a matrix M whose first n rows form an invertible n x n matrix, followed by m - n copies of the n-th row. Then the 1s from the m x m identity matrix need to be placed in the last m - n rows to make the extended matrix invertible (a small concrete instance is sketched below). I think the algorithm has to be:

  1. find a subset of n rows such that M restricted to these rows is invertible
  2. distribute the 1s across the remaining rows

Do you know how to do the first step in a smarter way than brute-forcing all combinations? For most matrices the rows 1..n will already work, but there are these special cases.
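
Here is the small instance mentioned above (the numbers are only for illustration):

using LinearAlgebra

M = [1.0 0.0;
     0.0 1.0;
     0.0 1.0]            # rank 2, but the last row repeats the second one
E1 = [1.0, 0.0, 0.0]     # identity column placed in the first m - n rows
@show det(hcat(M, E1))   # 0.0: the naive extension is singular
E2 = [0.0, 0.0, 1.0]     # placing the 1 in the last row instead
@show det(hcat(M, E2))   # nonzero: this extension is invertible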

mforets commented 4 years ago

I think that we can use a QR factorization to find suitable new columns orthogonal to the column space of M. This is the code that I have so far, which seems correct in the examples that I tried:

using LinearAlgebra

function extend(M)
    Q, R = qr(M)
    # I think that if rank(M) == size(M, 2), then a = size(R, 1) + 1 is always ok
    a = iszero(R) ? 1 : size(R, 1) + 1
    b = size(Q, 1)
    ext = Matrix(Q')[a:b, :]
    Mext = hcat(M, ext')
    return Mext
end
martenlienen commented 4 years ago

As so often, looking at the bigger picture and considering the column space reveals the solution. I also think that a = size(R, 1) + 1 is always correct as long as you assert that rank(M) == size(M, 2). But I would use size(R, 2), because the fact that size(R, 1) works is, I would say, an implementation detail of qr. How about the following?

using LinearAlgebra

function extend(M)
    m, n = size(M)
    @assert rank(M) == n && m > n

    Q, R = qr(M)
    return hcat(M, Q[:, (n + 1):end])
end
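
A quick check with a made-up full-column-rank matrix (the numbers are only illustrative):

M = [1.0 0.0;
     0.0 1.0;
     0.0 1.0]
Mext = extend(M)
@show rank(Mext)             # 3, so Mext is invertible
@show norm(M' * Mext[:, 3])  # ~0: the new column is orthogonal to the range of M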
mforets commented 4 years ago

Yes, I agree with those considerations and the implementation.

Moreover, the inverse of the extended matrix is readily available in extend, since it is inv_Mext = vcat(inv(R) * Q1', Q2') with Q1 = Q[:, 1:n] and Q2 = Q[:, (n + 1):end]. The LazySets.linear_map function accepts a kwarg inverse=..., so we gain a bit by passing it directly.
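
In code, that could look like the following sketch (the name extend_with_inverse is made up here, not a LazySets function):

using LinearAlgebra

function extend_with_inverse(M)
    m, n = size(M)
    @assert rank(M) == n && m > n

    Q, R = qr(M)
    Qfull = Q * Matrix(1.0I, m, m)      # materialize the full m x m orthogonal factor
    Q1 = Qfull[:, 1:n]                  # M == Q1 * R
    Q2 = Qfull[:, (n + 1):end]          # columns orthogonal to the column space of M

    Mext = hcat(M, Q2)
    inv_Mext = vcat(inv(R) * Q1', Q2')  # inverse of Mext, no extra factorization needed
    return Mext, inv_Mext
end

Then inv_Mext could be forwarded via the inverse=... keyword mentioned above.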

mforets commented 4 years ago

@cqql FYI, some of the ideas discussed in this thread are now part of the new minor release https://github.com/JuliaReach/LazySets.jl/releases/tag/v1.28.0, and I took the liberty of adding your real name (from your GitHub profile) to the release log. The other part of the changes is planned for the next release.

martenlienen commented 4 years ago

Great! And thanks for mentioning me in the release notes. Feels good :)