jump-dev / HiGHS.jl

A Julia interface to the HiGHS solver
https://highs.dev
MIT License

Support MOI.copy_to #41

Closed. doorisajar closed this issue 3 years ago.

doorisajar commented 3 years ago

I have a mass-balance-type LP and thought I'd try out HiGHS. Some quick benchmarking showed HiGHS to be significantly slower than Clp on my problem. Since the README says sharing such examples could be helpful, I thought I'd post a toy example that illustrates the performance delta I observed. Hope it's helpful; if not, feel free to close this. Either way, I'll be following HiGHS with interest! 😄

using JuMP
using Clp
using HiGHS
using BenchmarkTools

# Build a length-n vector by repeating each entry of v n ÷ 4 times.
function mass(v, n)
    m = map(a -> repeat([a], Int(n / 4)), v) |>
        b -> reduce((x, y) -> vcat(x, y), b)

    return m
end

function solve_lp(c, n, m1, m2, optimizer)

    model = Model(optimizer)

    @variable(model, 0 <= x[i = 1:n] <= 5)
    @variable(model, 0 <= y[i = 1:n] <= 3)

    @variable(model, l[i = 1:n] >= 0)

    @constraint(model, vlim[i = 1:n], x[i] + y[i] <= m1[i])

    @constraint(model, ylim[i = 1:n], y[i] <= m2[i])

    @constraint(model, dynamics[i = 2:n], l[i] <= x[i-1] * m1[i-1] + y[i-1] * m2[i-1])

    @objective(model, Max, sum(c .* x))

    optimize!(model)

    return (raw_status(model), objective_value(model))

end

# definitions
c = [5:0.1:25; 20:-0.1:1]

n = length(c)

m1 = mass([2, 0, 1, 4], n)
m2 = mass([0, 2, 3, 4], n)

# solve with Clp
@benchmark solve_lp(c, n, m1, m2, Clp.Optimizer) samples=100
# memory estimate:  5.04 MiB
# allocs estimate:  49097
# --------------
# minimum time:     5.534 ms (0.00% GC)
# median time:      6.297 ms (0.00% GC)
# mean time:        7.050 ms (10.43% GC)
# maximum time:     15.391 ms (55.42% GC)
# --------------
# samples:          100
# evals/sample:     1

# solve with HiGHS
@benchmark solve_lp(c, n, m1, m2, HiGHS.Optimizer) samples=100
# memory estimate:  4.27 MiB
# allocs estimate:  51562
# --------------
# minimum time:     19.587 ms (0.00% GC)
# median time:      20.602 ms (0.00% GC)
# mean time:        21.578 ms (3.40% GC)
# maximum time:     32.377 ms (26.83% GC)
# --------------
# samples:          100
# evals/sample:     1

odow commented 3 years ago

Thanks for the report. I was interested to see if direct_model made a difference:

using JuMP
using Clp
using HiGHS
using BenchmarkTools

function mass(v, n)
    return map(a -> repeat([a], Int(n/4)), v) |> b -> reduce((x, y) -> vcat(x, y), b)
end

function solve_lp(c, n, m1, m2, optimizer, is_direct = false)
    model = is_direct ? direct_model(optimizer()) : Model(optimizer)
    set_silent(model)
    @variable(model, 0 <= x[i = 1:n] <= 5)
    @variable(model, 0 <= y[i = 1:n] <= 3)
    @variable(model, l[i = 1:n] >= 0)
    @constraint(model, vlim[i = 1:n], x[i] + y[i] <= m1[i])
    @constraint(model, ylim[i = 1:n], y[i] <= m2[i])
    @constraint(model, dynamics[i = 2:n], l[i] <= x[i-1] * m1[i-1] + y[i-1] * m2[i-1])
    @objective(model, Max, sum(c .* x))
    optimize!(model)
    return (raw_status(model), objective_value(model))
end

c = [5:0.1:25; 20:-0.1:1]
n = length(c)
m1 = mass([2, 0, 1, 4], n)
m2 = mass([0, 2, 3, 4], n)

# julia> @benchmark solve_lp(c, n, m1, m2, Clp.Optimizer) samples=100
# BenchmarkTools.Trial:
#   memory estimate:  5.06 MiB
#   allocs estimate:  49116
#   --------------
#   minimum time:     5.080 ms (0.00% GC)
#   median time:      5.569 ms (0.00% GC)
#   mean time:        6.523 ms (12.77% GC)
#   maximum time:     14.905 ms (58.96% GC)
#   --------------
#   samples:          100
#   evals/sample:     1

# julia> @benchmark solve_lp(c, n, m1, m2, HiGHS.Optimizer, true) samples=100
# BenchmarkTools.Trial:
#   memory estimate:  2.39 MiB
#   allocs estimate:  36199
#   --------------
#   minimum time:     16.563 ms (0.00% GC)
#   median time:      17.235 ms (0.00% GC)
#   mean time:        18.275 ms (3.71% GC)
#   maximum time:     36.950 ms (38.83% GC)
#   --------------
#   samples:          100
#   evals/sample:     1

# julia> @benchmark solve_lp(c, n, m1, m2, HiGHS.Optimizer, false) samples=100
# BenchmarkTools.Trial:
#   memory estimate:  4.29 MiB
#   allocs estimate:  51561
#   --------------
#   minimum time:     18.049 ms (0.00% GC)
#   median time:      19.429 ms (0.00% GC)
#   mean time:        20.539 ms (3.87% GC)
#   maximum time:     33.027 ms (32.58% GC)
#   --------------
#   samples:          100
#   evals/sample:     1

If you look at the solve time reported by HiGHS, it's about the same as Clp's:

julia> solve_time(model)
0.0040657949866726995

so this is likely because HiGHS.jl builds the model row-by-row, whereas Clp.jl loads it in a single copy_to operation. This suggests we should add a fast copy_to to HiGHS as well.
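To see where the time goes, it helps to time the whole build-plus-solve and compare it against solve_time; a quick illustrative sketch (the model here is made up, not the example above):

using JuMP
using HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
total = @elapsed begin
    @variable(model, 0 <= x[i = 1:10_000] <= 1)
    @constraint(model, chain[i = 2:10_000], x[i] - x[i-1] <= 0.5)
    @objective(model, Max, sum(x))
    optimize!(model)
end
println("total build + solve: ", total)
# solve_time reports only the time HiGHS itself spent solving; the gap
# between the two numbers is JuMP/MOI model construction and loading.
println("solver alone:        ", solve_time(model))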

We can probably just copy the Clp code: https://github.com/jump-dev/Clp.jl/blob/12542b098a901de55e44587966c11b563203fcd5/src/MOI_wrapper/MOI_wrapper.jl#L240-L384 with the ccalls swapped as appropriate.
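For context, MOI.copy_to is the operation in the issue title: it takes a complete source model and loads it into the destination optimizer in one go. A minimal sketch of the MOI-level call, assuming a recent MathOptInterface (where a VariableIndex can be used directly as a constraint function):

using HiGHS
import MathOptInterface
const MOI = MathOptInterface

# Build a tiny model in MOI's generic in-memory cache.
src = MOI.Utilities.Model{Float64}()
x = MOI.add_variable(src)
MOI.add_constraint(src, x, MOI.Interval(0.0, 5.0))
MOI.set(src, MOI.ObjectiveSense(), MOI.MAX_SENSE)
f = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(2.0, x)], 0.0)
MOI.set(src, MOI.ObjectiveFunction{typeof(f)}(), f)

# Copy the whole model into a fresh HiGHS optimizer in one call. Without a
# specialized method, MOI's fallback feeds the optimizer one variable and
# one constraint at a time; the fast path proposed here would instead
# extract the problem in matrix form and pass it to HiGHS in a single call.
dest = HiGHS.Optimizer()
index_map = MOI.copy_to(dest, src)
MOI.optimize!(dest)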

jajhall commented 3 years ago

Interesting! I'm about to allow the "golden copy" of the constraint matrix in HiGHS to be held row-wise rather than column-wise, which will greatly speed up loading models row-by-row. This means a transposition has to take place when a model is solved, but that is only a linear-time operation. I'll let @odow know when it reaches the HiGHS master branch.
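As a rough illustration of that trade-off (a sketch using SparseArrays, not HiGHS internals): appending a row to a column-wise matrix disturbs every column's storage, whereas a row-wise copy makes it a cheap append, at the price of one transposition before the solve.

using SparseArrays

# Column-wise (CSC) storage: the layout the solver wants when it actually
# solves. Building this structure row-by-row means restitching every
# column on each row addition.
A = sparse([1, 2, 2], [1, 1, 2], [1.0, 2.0, 3.0], 2, 2)

# Holding the "golden copy" row-wise instead makes row-by-row loading an
# append; the one-off transposition back to column-wise form is then paid
# once, when the model is solved:
At = sparse(transpose(A))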