hjroh0315 opened 1 year ago
I'm not sure how to bound the errors of the simplex method. Is it possible to prove that the output of the algorithm always satisfies the following?

> The value of $\mathbf{c}^T\mathbf{x}$ must have a relative/absolute error of at most $10^{-6}$ compared with the actual optimal answer.
My intuition is that even just determining feasibility or boundedness is very difficult.
Generally, the simplex method converges to an optimal answer in a finite number of steps. For the original simplex algorithm there are cases where the solution cycles and never converges, but there are pivoting rules (e.g. Bland's rule) that prevent this issue. Without randomization the simplex algorithm takes $O(2^N)$ time in the worst case, while with randomized pivoting it takes polynomial time w.h.p.
Besides, the solution always has rational components if the coefficients given in the problem are rational. Alternatively, we could be much more generous about the error and allow up to $10^{-3}$.
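To make the rational-output claim concrete: the optimum sits at a vertex, which solves a square linear system taken from the tight constraints, so rational coefficients yield an exactly rational answer. A sketch with Python's `fractions.Fraction` and Cramer's rule (the $2\times 2$ system below is a made-up example):

```python
from fractions import Fraction

# Illustrative 2x2 example (numbers made up): an LP optimum lies at a vertex,
# i.e. the solution of a square subsystem of the constraints, so rational
# input data yields a rational answer. Cramer's rule on x + 2y = 4, 3x + y = 6:
a11, a12, a21, a22 = map(Fraction, (1, 2, 3, 1))
b1, b2 = Fraction(4), Fraction(6)
det = a11 * a22 - a12 * a21
x = (b1 * a22 - a12 * b2) / det
y = (a11 * b2 - b1 * a21) / det
print(x, y)  # 8/5 6/5, exactly, with no floating-point error
```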
As for the thought that even just determining feasibility/boundedness is difficult, I do agree. If other people agree with this, I can change the proposal to "determine the feasibility/boundedness".
If the feasible region is very small, we need to distinguish between a tiny region and the empty set. Therefore, even if we could compute all real quantities with high accuracy, I think the problem would still be unsolvable.
Furthermore, I think it is not always possible to compute the relevant real numbers within the allowed error. Subtracting two numbers that are very close in value can lead to round-off (cancellation) errors, and division involving such numbers can produce large absolute errors.
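A tiny illustration of the cancellation issue (the numbers are arbitrary):

```python
# Illustrative only: catastrophic cancellation when subtracting nearly equal doubles.
a = 1.0 + 1e-13        # the 1e-13 part keeps only a few significant bits
b = 1.0
diff = a - b           # mathematically 1e-13, but the low bits are already lost
rel_err = abs(diff - 1e-13) / 1e-13
print(rel_err)         # on IEEE-754 doubles, around 1e-3: far above 1e-6
```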
I understand that the answer is a rational number, so using rational arithmetic may resolve these concerns. However, can we bound the size of its numerator and denominator? Would 64-bit or 128-bit integers be sufficient? I suspect multi-precision integers are required, although I am unsure of the specific length bound.
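A rough way to estimate the needed integer width is Hadamard's inequality: the denominator of a basic solution divides the determinant of an $m \times m$ basis matrix, and $|\det| \le (\sqrt{m}\,U)^m$ when the entries are integers bounded by $U$. With made-up parameters $m = 20$, $U = 10^9$:

```python
# Hypothetical worst case: m tight constraints with integer coefficients |a_ij| <= U.
# Denominators of a basic solution divide the determinant of an m x m basis matrix,
# and Hadamard's inequality bounds |det| by (sqrt(m) * U)^m.
m, U = 20, 10**9
hadamard_bound = m ** (m // 2) * U ** m   # sqrt(m)^m * U^m (m is even, so exact)
print(hadamard_bound.bit_length())        # several hundred bits: 64/128-bit overflows
```

So for constraints of this size, fixed-width integers are clearly not enough and multi-precision rationals would be needed.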
I think it is a good idea to include an LP problem in Library Checker, but as mentioned above, there are many obstacles. Is there a way to eliminate them?
Do we have a problem that is a direct application of Linear Programming? In graphs it is usually network flow/MST/SSSP, but better algorithms are known for those. In geometry, I think the Chebyshev center (though most people abuse halfplane intersection for this) and the linear separability problem (again, people use hull-hull intersection for this) would be good applications of Linear Programming. Do we have a problem that is a direct application of LP, yet has no solution other than interpreting it as an LP (or rather, where solving it as an LP is much more efficient)? It would be very helpful if we could find one.
Here is a short description of the task I was preparing recently, which requires interpretation as LP.
You must assign real values in the range $[0,1]$ to each empty cell of an $N \times N$ grid, where initially some cells have fixed values. The board must satisfy: $\text{(row sum)}=1$, $\text{(column sum)}=1$, $\text{(diagonal sum)} \le 1$. Basically, it's an N-Queens problem where the integrality constraint does not exist. So it translates to a feasibility problem, i.e. maximize $0$ subject to the constraints above: each of the $2N$ equalities becomes two inequalities, and there are $4N-2$ diagonals, giving $8N-2$ inequalities in total. Still, I do not know whether this problem avoids the aforementioned issues, or whether it is a good fit for a "library checker" platform.
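A quick sanity check of the $8N-2$ count (the `(cells, sense, rhs)` representation below is just for illustration):

```python
def grid_lp_constraints(N):
    """Enumerate the LP constraints of the grid problem as (cells, sense, rhs),
    with each equality split into a <= and a >= inequality."""
    cons = []
    for r in range(N):                                  # row sums = 1
        cells = [(r, c) for c in range(N)]
        cons += [(cells, "<=", 1), (cells, ">=", 1)]
    for c in range(N):                                  # column sums = 1
        cells = [(r, c) for r in range(N)]
        cons += [(cells, "<=", 1), (cells, ">=", 1)]
    for d in range(-(N - 1), N):                        # "\" diagonals, sum <= 1
        cons.append(([(r, r - d) for r in range(N) if 0 <= r - d < N], "<=", 1))
    for s in range(2 * N - 1):                          # "/" diagonals, sum <= 1
        cons.append(([(r, s - r) for r in range(N) if 0 <= s - r < N], "<=", 1))
    return cons

print(len(grid_lp_constraints(8)))  # 2N + 2N + (2N-1) + (2N-1) = 8N - 2 = 62
```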
I believe it is impossible to put a problem with constraints as large as $NM \leq 500000$ on Library Checker. I think it's better for individuals to tune the execution speed on their own.
I think it is possible to set problems with sufficiently small constraints (e.g., $N=10$ or $N=100$). I haven't verified this thoroughly, though.
I think the judge solution will need to be prepared using rational numbers.
In practice, it is desirable for participants to be able to provide accurate answers using the double type. Therefore, I believe some additional constraints are necessary.
I know a much faster solution is available when the number of variables is very small, such as $N \le 4$. Low-dimensional linear programming is possible in $O(N^2M+N^4\sqrt{M}\log{M}+N^{O(N)}\log{M})$ expected time, and this is viable for many linear programming problems appearing in computational geometry. The algorithm appeared in Clarkson, Kenneth L. (1995), "Las Vegas algorithms for linear and integer programming when the dimension is small". I cannot yet vouch for the numerical stability of a floating-point implementation, but an implementation with bigints is provided by dacin21 at https://github.com/dacin21/dacin21_codebook/tree/master/lp. This is not very useful under the current constraints, but if we must change the task, I believe this is the way to go.
Problem
Given a matrix $A$ and two vectors $\mathbf{b}$ and $\mathbf{c}$, output $\mathbf{x}$, a solution vector of the following linear program.
max $\mathbf{c}^T\mathbf{x}$ s.t. $A\mathbf{x} \le \mathbf{b}$, $\mathbf{x} \ge 0$
Constraint
Solution / Reference
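As a starting point for the reference solution, here is a minimal exact tableau simplex in Python using `fractions.Fraction`, with Bland's rule to rule out cycling. This is a sketch, not the judge solution: it assumes $\mathbf{b} \ge 0$ (so the all-slack basis is feasible and no phase-one step is needed) and makes no attempt at efficiency.

```python
from fractions import Fraction

def simplex(A, b, c):
    """Maximize c^T x subject to A x <= b, x >= 0, in exact rational arithmetic.
    Assumes b >= 0 so the all-slack basis is feasible (no phase-one step).
    Bland's rule is used for pivot selection, which prevents cycling.
    Returns (optimal value, x), or None if the LP is unbounded."""
    m, n = len(A), len(c)
    # Tableau: m constraint rows [A | I | b], then the objective row [-c | 0 | 0].
    T = [[Fraction(v) for v in A[i]]
         + [Fraction(int(i == k)) for k in range(m)]
         + [Fraction(b[i])] for i in range(m)]
    T.append([Fraction(-v) for v in c] + [Fraction(0)] * (m + 1))
    basis = list(range(n, n + m))
    while True:
        # Bland's rule: entering column = lowest index with a negative reduced cost.
        col = next((j for j in range(n + m) if T[m][j] < 0), None)
        if col is None:
            break  # optimal
        # Ratio test; ties broken by the smallest basic-variable index (Bland again).
        row = min((i for i in range(m) if T[i][col] > 0),
                  key=lambda i: (T[i][-1] / T[i][col], basis[i]),
                  default=None)
        if row is None:
            return None  # no positive entry in the column: unbounded
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col] != 0:
                f = T[i][col]
                T[i] = [u - f * w for u, w in zip(T[i], T[row])]
        basis[row] = col
    x = [Fraction(0)] * n
    for i in range(m):
        if basis[i] < n:
            x[basis[i]] = T[i][-1]
    return T[m][-1], x
```

For example, `simplex([[1, 2], [3, 1]], [4, 6], [1, 1])` returns the exact optimum $14/5$ at $\mathbf{x} = (8/5, 6/5)$. A real judge solution would still need a phase-one step for infeasible origins and a bound on the coordinate sizes, as discussed above.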
Input
Output
Note / Discussion