sagemath / sage

Main repository of SageMath
https://www.sagemath.org

Add Cython wrappers for GLPK's interface glpssx.h (exact rational simplex) #18765

Open mkoeppe opened 9 years ago

mkoeppe commented 9 years ago

Compare with #18764 / #18735.

In this ticket, we would be using GLPK's header file glpssx.h. We would get direct access to rational simplex data. So, in contrast to #18764 + #18735, there would be no need to reconstruct the solution using possibly slow rational matrix computations on the Sage side. The downside is that glpssx.h is not installed and not advertised as a public API; see:

- http://lists.gnu.org/archive/html/help-glpk/2007-10/msg00031.html
- http://lists.gnu.org/archive/html/help-glpk/2008-06/msg00006.html
- http://lists.gnu.org/archive/html/help-glpk/2013-11/msg00019.html

One could make a new MixedIntegerLinearProgram backend that maintains both a standard glp problem (double floats) and a glpssx problem (GMP rationals). First solve the double-float problem using standard glp_ functions; then copy the basis to glpssx and continue there with the exact solver.
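A toy sketch of this dual bookkeeping, in Python with `fractions.Fraction` standing in for the GMP rationals a glpssx problem would hold (the class and method names are made up for illustration, not GLPK or Sage API):

```python
from fractions import Fraction

class DualTrackLP:
    """Toy sketch (hypothetical, not GLPK API) of the proposed backend:
    every piece of problem data is stored twice -- as doubles, mirroring
    the glp_prob the fast float simplex would work on, and as exact
    rationals, mirroring the mpq data of a glpssx problem."""

    def __init__(self):
        self.float_rows = []   # (coeffs, rhs) as doubles
        self.exact_rows = []   # (coeffs, rhs) as exact rationals

    def add_constraint(self, coeffs, rhs):
        # Accept exact input (ints, Fractions, or strings like '1/3');
        # the exact copy is authoritative, the float copy is derived from it.
        exact = ([Fraction(c) for c in coeffs], Fraction(rhs))
        self.exact_rows.append(exact)
        self.float_rows.append(([float(c) for c in exact[0]], float(exact[1])))

lp = DualTrackLP()
lp.add_constraint(['1/3', '2/3'], 1)
print(lp.exact_rows[0])   # exactly (1/3, 2/3) <= 1, no rounding
print(lp.float_rows[0])   # the double approximation fed to the float solver
```

Keeping the exact copy authoritative means the float problem can be rebuilt or refreshed at any time without accumulating rounding error.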

CC: @yuan-zhou @nathanncohen @dimpase

Component: numerical

Issue created by migration from https://trac.sagemath.org/ticket/18765

dimpase commented 9 years ago
comment:1

I don't understand how using standard glp_ functions would not lead to loss of precision, rendering subsequent exact computations meaningless. Are you going to watch for numerical troubles in the double-float phase?

Further, I don't think using non-public, undocumented features is a good idea. The next version would break them, and we'd be stuck maintaining a fork... Perhaps we first have to find a way to get the GLPK folks to finally make these things public?

mkoeppe commented 9 years ago
comment:2

Replying to @dimpase:

> I don't understand how using standard glp_ functions would not lead to loss of precision, rendering subsequent exact computations meaningless. Are you going to watch for numerical troubles in the double-float phase?

One just uses double floats to navigate to some basis that's hopefully close to an optimal one. Then one moves to the same basis in the exact problem and starts the exact simplex from there. This is always correct, no matter what numerical troubles the double-float phase ran into.
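The warm-start step can be illustrated with a small self-contained sketch: whatever basis the (possibly inaccurate) float phase ends on is only a *suggestion*; the exact phase recomputes the basic solution over the rationals, so no float error survives into the result. The matrix, right-hand side, and basis below are made up for illustration:

```python
from fractions import Fraction

def exact_basic_solution(A, b, basis):
    """Solve B x_B = b exactly over the rationals, where B is the
    submatrix of A formed by the columns indexed by `basis`.
    Plain Gauss-Jordan elimination with Fractions: every number is
    recomputed exactly, regardless of how the basis was found."""
    n = len(b)
    # augmented matrix [B | b], all entries exact rationals
    M = [[Fraction(A[i][j]) for j in basis] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)  # nonzero pivot
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][-1] for i in range(n)]

# Standard-form LP data; columns 2 and 3 are slacks.
A = [[2, 1, 1, 0],
     [1, 3, 0, 1]]
b = [4, 5]
# Suppose the double-float phase ended on the basis {x0, x1}:
x = exact_basic_solution(A, b, basis=[0, 1])
print(x)  # [Fraction(7, 5), Fraction(6, 5)] -- exact, whatever the floats did
```

If the suggested basis turns out not to be optimal (or not even feasible), the exact simplex simply continues pivoting from it; the float phase can only cost iterations, not correctness.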

> Further, I don't think using non-public, undocumented features is a good idea. The next version would break them, and we'd be stuck maintaining a fork... Perhaps we first have to find a way to get the GLPK folks to finally make these things public?

With this ticket I first want to find out whether it will be worth it, performance-wise, compared to the other options. If it is, we can look into asking the GLPK developers.

dimpase commented 9 years ago
comment:3

Replying to @mkoeppe:

> Replying to @dimpase:
>
> > I don't understand how using standard glp_ functions would not lead to loss of precision, rendering subsequent exact computations meaningless. Are you going to watch for numerical troubles in the double-float phase?
>
> One just uses double floats to navigate to some basis that's hopefully close to an optimal one.

A more realistic scenario is "try to use double floats" to navigate to "something that is hopefully close enough to a basis". This is what I learnt while trying to solve LPs that are too hard for a floating-point simplex (as implemented in CPLEX, say). A typical scenario: