Interesting. I confirmed that this still happens in the latest version of SciPy. Revised simplex solves this one if you reduce the tolerance:
res = linprog(c, A_eq=matA_in, b_eq=vecb_in,bounds=xrange, method='revised simplex', options={"tol":1e-5})
I'll try to check whether gh-12043 fixes it at some point.
Note that in the example case here, as written, `ref` and `matA` do not satisfy the property exactly (there's a 1e-6 difference). If I correct for that,
for j in range(matA.shape[0]):
    matA[j, -1] = matA[j, :-1].sum()
ref[-1] = ref[:-1].sum()
a solution is obtained in SciPy 1.5.2. But maybe the property holds close to FP precision in the real example case. The report above says version 1.2.1 was used --- it would be useful to know if the problem still occurs in newer versions, given that IIRC fixes to the redundancy handling were done since then.
Even the HiGHS solvers are reporting infeasibility on this one. @jajhall @mckib2 thought you might be interested in this.
it would be useful to know if the problem still occurs in newer versions, given that IIRC fixes to the redundancy handling were done since then.
Yes, I think it does still occur, and the new RR methods in gh-11249 don't fare any better. I can't dig much deeper right now.
@mdhaber: did you correct for the rounding error in the ref/matA matrices as typed? It works for me if that's done. Or is the problem expected to be feasible even if this is not done?
Well, ideally it would solve the problem even if that's not done, because even with the matrix as typed, @mn-on-github's `ans_x` produces a small residual `matA_in @ ans_x - ref`.
But you're right, this seems to be a tolerance issue, because when you correct the rounding error as described, redundancy removal kicks in and the problem is solved successfully.
One thought for a while has been to expose the redundancy detection tolerance as an option. That might help here.
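No such tolerance option exists yet; the closest existing knob (if I recall the documented interior-point options correctly) is rr, which disables redundancy removal outright rather than loosening it. A sketch, reusing the variable names from the scripts in this thread:

from scipy.optimize import linprog

# Workaround sketch: skip redundancy detection entirely instead of tuning
# its tolerance; 'rr' is a documented linprog option for this method.
res = linprog(c, A_eq=matA_in, b_eq=vecb_in, bounds=xrange,
              method='interior-point', options={'rr': False})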
I'd be interested in looking at this problem. Do you have it as a .lp or .mps file?
@jajhall I can make an MPS file later this evening for HiGHS testing
@jajhall Here is an MPS file generated for the problem above. HiGHS reports infeasible, but I'm not sure whether tolerances could be adjusted to resolve it as above.
I appreciate the quick replies @mdhaber @jajhall. It is interesting to see the analysis proceeding in a decentralized way among people living in different regions. The Earth is indeed round.
@pv The number of digits in matA, ref, and ans_x is six, due to the default configuration of double-to-string conversion via std::stringstream when generating the Python script. I will change 6 -> 18 digits and check whether at least my specific issue is solved.
@mckib2 My code using IPOPT takes full double-precision numbers and fails in a similar setting (but less frequently, and only when passing a much larger matA). Incorrect convergence is a common problem when numerically solving SAT for a linear formula, but I felt this was worth reporting because the convergence error is notably large.
@jajhall Here is an MPS file generated for the problem above. HiGHS reports infeasible, but I'm not sure whether tolerances could be adjusted to resolve it as above.
Thanks, but this LP is infeasible, and not marginally so: all columns are fixed at zero, and the constraints are equations with nonzero RHS [and nonzero coefficients]. If the columns are allowed to be [0, inf] then it's still infeasible. With free columns HiGHS finds an optimal (i.e. feasible) solution. Perhaps with more digits the optimal basis matrix is closer to being singular, and that causes trouble, but there's no issue with the 6-digit coefficients. @mn-on-github @pv @mckib2 @mdhaber
@jajhall @pv You are right, and I appreciate your assessment. It seems that truncated digits are a major root cause of at least the case I posted. When I extend the digits of the coefficients in ref and matA, at least the posted case becomes SAT.
#test_18digits.py
import codecs
from scipy.optimize import linprog
import numpy as np
c=[0,0,0,0,0,0,0,0,0]
ref1 = np.array(
[0.917112674839566, 0.270875554297848, -0.395954915896638, -0.196866948474585, 0.404833635233814, 1])
matA1 = np.array([
[0.656012754822684, -0.365063469865795, -0.983575650131013, 1.11294402541604, 0.579682339758087, 1],
[0.799692868236245, 0.500389828428333, -1.32220518047304, 0.327986313956984, 0.694136169851478, 1],
[1.54842447256547, -0.321543695497237, -1.81128355027017, 0.949595672291004, 0.634807100910929, 1],
[1.12976562961828, -0.271752189122717, -1.74753541966573, 1.24775274269162, 0.641769236478556, 1],
[1.05679123869119, 0.506714186082705, -0.583985819768568,-0.627242768591209, 0.647723163585883, 1],
[1,0,0,0,0,1],
[0,1,0,0,0,1],
[0,0,0,1,0,1],
[0,0,0,0,1,1]
])
matA1_in=matA1.T.tolist()
vecb1_in=ref1.tolist()
xrange = [
[0,1],[0,1],[0,1],[0,1],[0,1],[0,1],[0,1],[0,1],[0,1]
]
res1 = linprog(c, A_eq=matA1_in, b_eq=vecb1_in,bounds=xrange, method='interior-point')
logfile=codecs.open('pyout.log','w','utf-8')
print(res1, file=logfile)
if res1.status == 2:
    print("UNSAT", file=codecs.open('check_result.log','w','utf-8'))
else:
    print("SAT", file=codecs.open('check_result.log','w','utf-8'))
# Now the answer is SAT and
ans_x1=np.array([0.04934082244280857, 0, 0, 0.00702774001205814, 0.5738893795845879, 0.27032349854121018, 0, 0.09941855941933597, 0])
residual1 = np.dot(matA1_in, ans_x1)-ref1
print("residual1 = ",file=logfile)
print(residual1,file=logfile)
# residual1 = [-5.55111512e-16 -8.88178420e-16 -4.44089210e-16 -4.99600361e-16 -1.16573418e-15 6.66133815e-16]
# test_18digits.py end
Now I get a SATisfiable assignment successfully in this case and some other similar cases.
con: array([-4.86871132e-09, -7.18150595e-10, 5.58795804e-09, -3.88485341e-09,
-3.50206630e-09, -7.38582839e-09])
fun: 0.0
message: 'Optimization terminated successfully.'
nit: 4
slack: array([], dtype=float64)
status: 0
success: True
x: array([0.01162768, 0.03035119, 0.04169053, 0.00607038, 0.44224728,
0.34643735, 0.05089497, 0.01047025, 0.06021038])
Thanks to this, I have peace of mind and have resumed integrating scipy into my main algorithm and re-testing. (But I am now caught by a different assertion error in my own algorithm...) It will take some time to verify the other cases with larger matrices, on which even IPOPT sometimes fails.
Q. As long as an interior point method (for LP at least) is globally convergent toward either a satisfiable assignment or a point close to a satisfiable domain (by internally minimizing the maximum of the constraint violations), the constraint violations should be as small as O(1e-7) or so. Doesn't scipy.linprog with method='interior-point' guarantee this global convergence even when the LP is infeasible?
I set an assertion on the constraint violations to validate an output x from scipy. In this case, the final search iterate stopped at a point x = array([0.70514251, 0.10725385, ...]) where the constraint violations are larger than they supposedly should be: con = array([-1.69741633, -0.25037501, ...]). I don't use the simplex method because this property does not hold when the LP is infeasible.
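For concreteness, the assertion I use looks roughly like this (a minimal sketch; the helper name and the tolerance are my own choices, and res is the object returned by linprog):

import numpy as np

def assert_near_feasible(res, A_eq, b_eq, tol=1e-6):
    # Reject the reported iterate if the equality-constraint residual is too large.
    con = np.dot(A_eq, res.x) - np.asarray(b_eq)
    worst = np.max(np.abs(con))
    assert worst < tol, "constraint violation %g exceeds %g" % (worst, tol)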
P.S. When should this issue (#12954) be closed? Is there a button to reopen it if a similar problem reemerges?
Re:global convergence, please see here for the algorithm on which the interior point implementation is based.
Re: constraint violations larger than they should be, can you post that example? `linprog` in current SciPy shouldn't do this without reporting an error status, but I don't remember about SciPy 1.2.
You can close the issue, and GitHub will allow you to reopen if need be.
Good morning. I learned that the MOSEK interior-point algorithm underlies scipy.linprog, and that MOSEK is included in the GAMS solver library.
Two good properties of an interior point method using a variety of penalty/barrier functions are that (1) it can start from an initially infeasible point and (2) it drives the search iterate x toward a final point where the KKT conditions hold. Property (2) pushes x toward minimizing the primal infeasibility of the equality constraints, so max_i |(A_eq·x - b_eq)_i| decreases even if the LP is infeasible (and goes to 0 if it is feasible). FYI: IPOPT uses a logarithmic barrier function. cepac.cheme.cmu.edu/pasilectures/biegler/ipopt.pdf (Eq-5)
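For readers without the slides at hand, the log-barrier subproblem has the familiar form (paraphrasing; the notation in Eq-5 of the slides may differ):

\min_{x} \; \varphi_\mu(x) = f(x) - \mu \sum_{i=1}^{n} \ln x_i \quad \text{subject to} \quad c(x) = 0,

solved for a decreasing sequence \mu \downarrow 0, so that bound feasibility is maintained by the barrier while the equality violations are driven down.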
Perhaps I am biased toward a particular primal-dual IP method, but I expect that the small constraint violation ||residual|| = O(9.9293934e-7) in the original post, where residual = [-9.9293934e-07, -5.2687482e-07, ...], means that an actual satisfiable assignment is close to the final search iterate ans_x = [0.04934082, 0, 0, 0.00702774, ...], with a bound ||matA_in·ans_x - vecb_in|| < O(9.9293934e-7) ~= O(1e-6). So I can judge that a precise satisfiable assignment is near ans_x within a small bound inv(A_eq)·(1e-6, ..., 1e-6) (abusing notation!). Eventually, ans_x1 = [0.04934082244280857, 0, 0, 0.00702774001205814, ...] turned out to be close to the original ans_x.
But I got an error message from scipy with a final search iterate x = [0.70514251, 0.10725385, ...] that is nowhere close to a potential satisfiable assignment, in the sense that the constraint violations con = matA_in·x - vecb_in = [-1.69741633, -0.25037501, ...] give max_i |con[i]| = 2.57498303, which is not small at all. This is what motivated me to post here. Unless property (2) holds, the final search iterate says nothing about the degree of primal infeasibility, even in a case where the primal infeasibility is small and a subtle numerical error could have flipped the decision from SAT to UNSAT. Now we know that matA has 6-digit coefficients, so the constraint violation may not go far below O(1e-7). Still, an IP solver should report a final search iterate such that max_i |con[i]| = O(1e-7).
The next one is this. I am not abusing scipy.optimize.linprog, but...
# test2.py
import codecs
from scipy.optimize import linprog
import numpy as np
c = [0, 0, 0, 0, 0]
ref = np.array([2.36046183024986, -34.8648995477781, 26.5532547806202, 33.667557368757, -26.716374431849,1])
matA=np.array([
[10.9173968750744, -166.472085847037, 125.593681719784, 158.890054572734, -127.929047320556,1],
[1, 0, 0, 0, 0, 1],
[0, 0, 1, 0, 0, 1],
[0, 0, 0, 1, 0, 1],
[0, 0, 0, 0, 1, 1],
])
matA_in = matA.T.tolist()
vecb_in = ref.tolist()
xrange = [
[0,1],[0,1],[0,1],[0,1],[0,1]]
res = linprog(c,A_eq=matA_in, b_eq=vecb_in, bounds=xrange, method='interior-point')
logfile=codecs.open('pyout.log','w','utf-8')
print(res, file=logfile)
if res.status == 2:
    print("UNSAT", file=codecs.open('check_result.log','w','utf-8'))
else:
    print("SAT", file=codecs.open('check_result.log','w','utf-8'))
# scipy returns UNSAT, but the answer is SAT and
ans_x=np.array([0.209433908215782, 0.07398873516026, 0.249679170836811, 0.390592262970425, 0.07630592281674])
# an evidence is that
residual = np.dot(matA_in, ans_x)-vecb_in
print("residual = ",file=logfile)
print(residual,file=logfile)
# residual = np.array([-2.22044605e-15, -2.84217094e-14, -1.06581410e-14, -1.42108547e-14, -1.77635684e-14, 1.79856130e-14])
# test2.py end
scipy.optimize.linprog reports the following UNSAT decision.
con: array([0.07398874, 0. , 0.24967917, 0.39059226, 0.07630592,
0.79056609])
fun: 0.0
message: 'There is a linear combination of rows of A_eq that results in zero, suggesting a redundant constraint. However the same linear combination of b_eq is nonzero, suggesting that the constraints conflict and the problem is infeasible.'
nit: 0
slack: array([], dtype=float64)
status: 2
success: False
x: array([0.20943391, 0. , 0. , 0. , 0. ])
The same as the original environment.
>> import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info)
1.2.1 1.17.4 sys.version_info(major=3, minor=7, micro=7, releaselevel='final', serial=0)
Is this UNSAT decision due to conservative filtering in the presolve step, perhaps?
Yes, it's presolve. I'm not sure why it's erroneously detecting infeasibility; that tolerance is actually pretty loose. I can't dig into it right now, but if you turn presolve off it solves (despite warnings).
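For concreteness, with the variables from test2.py above, turning presolve off looks like this (presolve is a documented linprog option):

res = linprog(c, A_eq=matA_in, b_eq=vecb_in, bounds=xrange,
              method='interior-point', options={'presolve': False})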
@mdhaber Then I will continue with presolve disabled. I know well that hasty debugging of a numerical algorithm often ends up adding a new bug. Now I understand why the series of computational reproducibility workshops at the Supercomputing conferences attracts developers. How hard it is to write library code that must take whatever inputs it is given and work correctly!
The third one is this. scipy.linprog reports a ValueError and my algorithm stalled.
# test3.py
import codecs
from scipy.optimize import linprog
import numpy as np
c = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
ref = np.array([
-0.181114042864394, -0.0986934423627512, 0.0926324349335375, 0.4414840443578, 0.745691005935808, 1
])
matA = np.array([
[-0.544843623980821, -0.129457661609659, 0.0292020313838515, 0.670992016634673, 0.974107237571955, 1],
[-0.149268744455766, -0.11599905475264, -0.298316695589, 1.49561950249148, 0.0679649923059217, 1],
[-0.00199673139556178, 0.0358758752825199, 0.745620155655796, -0.74224371858081, 0.962744419038056, 1],
[-0.142694285536776, -0.039119685519564, -0.457914024006347, 0.898777190442189, 0.740950804620498, 1],
[-0.273643891081474, -0.273549950317399, 0.590763051781555, 0.7619983137542, 0.194432475863119, 1],
[-0.369197297154071, 0.110751316915769, -0.487181692608439, 0.952173818715747, 0.793453854130994, 1],
[-0.62094951493276, 0.0179490559586735, 0.0593931913110011, 0.589327295376015, 0.954279972287071, 1],
[-0.186469051147906, 0.498803999976586, 0.189599429572418, -0.471826032876353, 0.969891654475256, 1],
[-0.416763850824591, 1.0459812609809, -0.928079976725902, 1.02061935762679, 0.278243208942802, 1],
[0, 1, 0, 0, 0, 1],
[0, 0, 1, 0, 0, 1],
[0, 0, 0, 1, 0, 1],
[0, 0, 0, 0, 1, 1]
])
matA_in = matA.T.tolist()
vecb_in = ref.tolist()
xrange = [
[0,1],[0,1],[0,1],[0,1],[0,1],[0,1],[0,1],[0,1],[0,1],[0,1],[0,1],[0,1],[0,1]
]
res = linprog(c,A_eq=matA_in, b_eq=vecb_in, bounds=xrange, method='interior-point')
print(res,file=codecs.open('py_out.log','w','utf-8'))
if res.status == 2:
    print("UNSAT", file=codecs.open('check_result.log','w','utf-8'))
else:
    print("SAT", file=codecs.open('check_result.log','w','utf-8'))
# test3.py end
scipy.optimize.linprog produces the following broken output.
/usr/lib64/python3.7/site-packages/scipy/optimize/_linprog_util.py:704: OptimizeWarning: A_eq does not appear to be of full row rank. To improve performance, check the problem formulation for redundant equality constraints.
warn(redundancy_warning, OptimizeWarning)
Traceback (most recent call last):
  File "test3.py", line 31, in <module>
    res = linprog(c,A_eq=matA_in, b_eq=vecb_in, bounds=xrange, method='interior-point')
  File "/usr/lib64/python3.7/site-packages/scipy/optimize/_linprog.py", line 478, in linprog
    complete, undo, status, message, tol, iteration, disp)
  File "/usr/lib64/python3.7/site-packages/scipy/optimize/_linprog_util.py", line 1322, in _postprocess
    lb, ub, tol, message
  File "/usr/lib64/python3.7/site-packages/scipy/optimize/_linprog_util.py", line 1235, in _check_result
    raise ValueError(message)
ValueError: The algorithm terminated successfully and determined that the problem is infeasible.
@mdhaber What do you think this output means?
You are still using 1.2? I get:
con: array([ 5.03563101e-05, -4.47545081e-05, -6.71278946e-06, -1.14734367e-04,
-1.23698210e-04, -2.39543565e-04])
fun: 0.0
message: 'The solution is feasible, but the solver did not report that the solution was optimal. Please try a different method.'
nit: 8
slack: array([], dtype=float64)
status: 4
success: False
x: array([1.61717941e-01, 1.15004723e-04, 4.57628500e-05, 1.47362232e-01,
2.63036938e-01, 2.68474130e-05, 2.68003321e-05, 8.74335192e-06,
4.24396778e-06, 4.88798067e-06, 1.73384726e-05, 2.82160342e-05,
4.27844588e-01])
The branch of the code this goes into was created based on observed behavior of the simplex method. There is a condition after the solver terminates:
elif status == 2 and is_feasible:
    # Occurs if the simplex method exits after phase one with a very
    # nearly basic feasible solution. Postsolving can make the solution
    # basic, however, this solution is NOT optimal
    status = 4
    message = ("The solution is feasible, but the solver did not report "
               "that the solution was optimal. Please try a different "
               "method.")
Apparently, it occurs for all three methods on this problem, not just simplex.
`status == 2` means that the solver terminated successfully and found the problem to be infeasible. `is_feasible` is a separate sanity check to see whether the solution is feasible:
invalid_bounds = (x < bounds[:, 0] - tol).any() or (x > bounds[:, 1] + tol).any()
invalid_slack = status != 3 and (slack < -tol).any()
invalid_con = status != 3 and (np.abs(con) > tol).any()
is_feasible = not (invalid_bounds or invalid_slack or invalid_con)
but it has a relatively loose tolerance of 0.00031622776601683794 (np.sqrt(1e-9) * 10).
Since all three solvers - and actually both of the HiGHS solvers - conclude that the problem is infeasible (`status == 2`), I would say that this problem is infeasible by any reasonable definition. However, it is nearly feasible, as the solvers all produce solutions that satisfy the constraints with looser tolerances.
The tolerance it uses to check feasibility has been tuned over time, but mostly to check for a different case - when the solver reports an optimal solution that turns out to be infeasible. I suppose we could use a different tolerance for this case so you get a more conclusive result, but choosing the perfect level for infeasibility edge cases like this is tricky. So perhaps the best thing to do here would be to clarify the output message.
f"The solver did not report that the solution was optimal, but the residuals are less than {tol}. Please try a different method. If you continue to receive this message, the problem is nearly, but not quite, feasible."
Or maybe
... is not feasible, but nearly so.
The solution returned is not likely to be "nearly optimal", though, at least not for the simplex solvers, since they didn't even make it into phase 2. Maybe I should mention that, too?
Suggestions?
@jajhall My apologies, I generated the wrong MPS file for the problem described. The bounds should not be fixed -- they should be upper bounded. This was due to a bug in the bounds processing for scikit-glpk that was recently corrected. I'll upload the corrected MPS file although it still turns out to be infeasible. sat_infeasible_corrected.mps.txt
Thanks: I did actually try that fix, and recall that the LP needs some negative values to be feasible.
Interesting problems - I've never seen them before.
@mdhaber Then I will continue with presolve disabled. I know well that hasty debugging of a numerical algorithm often ends up adding a new bug. Now I understand why the series of computational reproducibility workshops at the Supercomputing conferences attracts developers. How hard it is to write library code that must take whatever inputs it is given and work correctly!
Hi @mn-on-github
Indeed, it's hard, but it's one of the goals of HiGHS!
Thanks for your recent near-feasible LP. I've not seen your class of LP before. If the LP is classed as infeasible, are you still interested in the "best" point that the solver can find, or just the statement of infeasibility [to within the tolerance used]?
I'm developing the HiGHS solvers and am keen to understand what users want.
Currently in HiGHS, if an LP is found to be infeasible, no point is returned. This is because the main solver is dual simplex, which identifies primal infeasibility as a consequence of dual unboundedness. There is an associated (primal) solution, but it isn't "best" in any clear sense. However, HiGHS has an interior point (IPM) solver [that I didn't write] that should identify a "best" infeasible (primal) point, and I'm currently developing a primal simplex solver that will also determine a "best" infeasible (primal) point, so these could be returned. Note that "best" is not the same in the two cases. Sorry if this is something you're aware of (or TMI), but the IPM solver yields a point that is feasible with respect to the bounds on the variables but will have minimized the constraint violations. [I get the impression that this is what you're after.] For an LP with n variables and m constraints, primal simplex yields a point for which n of the variables and constraints will satisfy their bounds, but (up to) m will be violated in a sense that is minimal, but not so clear. Neither solution method will find a point for which the violations of variable and constraint bounds are jointly minimized - arguably the "best" solution to the problem.
In passing, I see that you've had problems with presolve declaring infeasibility incorrectly. Apparently this is due to an equation being identified as linearly dependent on other rows, but inconsistent. Given your matrix coefficients, I can see that this could be difficult to assess, due to inexact arithmetic. I guess that this presolve technique is used to make life easier for an IPM solver. However, the HiGHS presolve is designed as a precursor to a simplex solver - which isn't troubled by near-deficient equations - so the presolve doesn't try to identify them. It would seem that the HiGHS IPM solver isn't troubled by them, either. This is interesting, and something that I'll check up on.
The HiGHS IPM solver has some unusual features that make it particularly reliable and accurate. If you're looking for a reliable IPM solver for your problems, HiGHS might well be what you need. [It's got a direct Python interface.]
Julian
@mdhaber I use the same Python version. As long as I can avoid the error by configuring whatever reasonable options, it is OK. Can I selectively assign tol for the constraint violations "con" and for the bound constraints xrange?
Since the column vectors of matA_in are linearly dependent, I want to test satisfiability of the bound constraints 0 - 1e-5 <= ans_x[i] <= 1 + 1e-5 while keeping |matA_in·ans_x - vecb_in| < 1e-12 (strictly satisfied).
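In the meantime, here is a minimal sketch of the two-tolerance acceptance test I have in mind, applied after the solve (the helper name and structure are my own):

import numpy as np

def accept(ans_x, A_eq, b_eq, bound_tol=1e-5, eq_tol=1e-12):
    # Loose tolerance on the box bounds, strict tolerance on the equalities.
    x = np.asarray(ans_x)
    residual = np.dot(A_eq, x) - np.asarray(b_eq)
    in_bounds = np.all(x >= -bound_tol) and np.all(x <= 1 + bound_tol)
    return bool(in_bounds) and np.max(np.abs(residual)) < eq_tol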
@jajhall Simultaneous submission! I wrote a horribly complex numerical algorithm and am suffering through debugging it. So I use scipy.linprog as an external library built into my "4th" verification toolchain. If HiGHS is more reliable (it will be challenged when matA gets much larger!) I will add HiGHS as my "5th" verification toolchain. IPOPT is built into my "3rd" verification toolchain. They (and my algorithm) reported different decision results. Which one to trust...
Although I cannot explain anything about my specific algorithm that generates the particular matA, I can elaborate on the many applications that root in a single motivation.
0) Technical motivation. I want an LP solver to report a final search iterate that is close to a feasible domain (or somewhere the primal infeasibility is suppressed), because the final infeasible search iterate gives a clue for determining a conflicting subset of constraints. The SAT solving community calls such a conflicting set a (minimal) UNSATisfiable core. The UNSAT core is used in CDCL (Conflict-Driven Clause Learning) to refine the search domain. Note that a single LP of inequality constraints forms a convex partition.
1) Applications. This LP solver is used for numerically testing the satisfiability of a propositional logic formula that consists of linear constraints. The application domains are diverse.
a) Formal verification of a numerical program; finding a bug in a deep neural network: aaai.org/Symposia/Spring/sss19symposia.php#ss09 This problem reduces to computing a bad input that satisfies a condition on the bad output, subject to a constraint that the input is valid.
A generalized setting is known as model checking. For this I use an NLP solver. dl.acm.org/doi/10.1145/2970276.2970319
b) A component of a general-purpose satisfiability solver: an LP solver is built into the SAT solving process using CDCL. dl.acm.org/doi/10.1145/3049797.3049819
c) (Traditional AI) scheduling problems: a traditional task scheduling problem reduces to solving the satisfiability of a logically complex (conjunction, disjunction, and negation) combination of linear constraints on task[i].start_time and task[i].end_time. If task[i] and task[j] are mutually exclusive, the propositional formula is (task[i].end_time < task[j].start_time || task[j].end_time < task[i].start_time) && (task[i].start_time < task[i].end_time) && (task[j].start_time < task[j].end_time). This is NP-hard, but some SMT solvers, such as Microsoft Z3, attempt to overcome the complexity by applying CDCL.
d) Reasoning/estimation problems: a reasoning problem in the general sense is to estimate missing state variables using a model that consists of undetermined parameters and their constraints. A non-trivial application: the state estimation problem for a linear model (such as a linear Kalman filter or a Luenberger observer in control theory) reduces to the numerical satisfiability of equality constraints plus bound constraints on an acceptable estimation error. The state variables I want to estimate could be the position of a GPS-guided vehicle.
e) Fault handling procedures: if I set too small a bound on the acceptable estimation error (to improve estimation accuracy) and the propositional logic formula becomes UNSAT, it implies that we cannot get such a good estimate, and I need to set the bound larger. In a noisy environment where sensor data often becomes corrupted, existing linear state estimators become unstable. If a state estimator employing SAT solving gets an UNSAT decision (meaning model/data inconsistency) with a large primal infeasibility, a fault handling procedure is called to reject the corrupt sensor data. This FDIR procedure is mandatory for every safety-critical system. To let an automated fault diagnosis process identify which sensor data is likely faulty, the final search iterate is needed. The recovery process simply discards the suspect sensor data and tests whether the original formula becomes SAT.
f) Model-based estimation problems: in the field of autonomous driving, sensors collect diverse aspects of the operating environment. A developer designs a data fusion algorithm that computes a good estimate of the world state using a world model, often built via statistical model identification. The world state is what he wants to infer from the actual sensor data stream.
2) Variations
As long as all the linear constraints are combined in a single conjunctive form, a single LP solve gives SAT/UNSAT. But in reality the linear constraints are conditional:
if (linearFormulaA) then (linearFormulaB)
is equivalent to the satisfiability of a disjunction of linear constraints:
not(linearFormulaA) || linearFormulaB.
Often a DNF (Disjunctive Normal Form) -> CNF (Conjunctive Normal Form) transformation yields the standard input format for a SAT solver.
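At the propositional abstraction level (each linear constraint replaced by a Boolean variable), the rewriting looks like this; a small sympy illustration, not part of my actual toolchain:

from sympy.abc import A, B, C
from sympy.logic.boolalg import to_cnf

# An implication becomes a disjunction; distributing OR over AND yields CNF.
print(to_cnf(~A | B))       # B | ~A (already in CNF)
print(to_cnf((A & B) | C))  # (A | C) & (B | C)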
3) Precision and reliability matter. Precision and correctness of the SAT solver are essential, because a user wants to refute an adversarial claim that his DNN is faulty by giving evidence that the generated propositional formula is UNSAT. It is easy to validate a SAT decision, as the solver reports an actual satisfiable assignment. But if a SAT solver incorrectly reports UNSAT, nobody can confirm whether the propositional formula is indeed UNSAT. So the stated global convergence is a basic requirement, ensuring that the SAT solver reports a satisfiable assignment whenever at least one exists.
I wrote too much... but I hope these examples fascinate the developers/readers of this lengthy thread.
I use the same python version
Oops, maybe it was a development version I was testing with? Maybe we changed that? New release will come out later this year, I think.
Under the hood there are only two kinds of constraints, equality and non-negativity, so you can't independently adjust tolerances on bounds vs. equality constraints in the existing interface. That said, linprog is not, by default, very smart about problem scale, so you might be able to get the effect you want by rescaling things. Maybe try reducing the upper bounds on the decision variables by a few orders of magnitude and multiplying your constraint matrix coefficients by the same factor?
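A sketch of that rescaling via the substitution y = s*x, with the variables from test2.py (the factor s is illustrative; the equality residuals are unchanged in absolute terms while the bound tolerance is effectively loosened relative to them):

import numpy as np
from scipy.optimize import linprog

s = 1e-3                             # illustrative scale factor
A_scaled = np.asarray(matA_in) / s   # if A_eq @ x = b_eq, then A_scaled @ (s*x) = b_eq
bounds_scaled = [(0, s)] * len(c)
res = linprog(c, A_eq=A_scaled, b_eq=vecb_in, bounds=bounds_scaled,
              method='interior-point')
x = res.x / s                        # map back to the original variables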
It also might be worthwhile for you to install a development version of the code so you can modify it.
It also might be worthwhile for you to install a development version of the code so you can modify it.
You would also be able to use HiGHS through it without changing your code. We added HiGHS simplex and IPM as `linprog` methods, but those aren't in a release yet.
I switched to HiGHS 1.0 (C++ API) and it does not fail on the cases I reported above. Now matA grows beyond 5 rows x 800 columns. Excellent! (though nobody knows whether I will report a bug next month...) I thank @mdhaber, @pv, @jajhall and @mckib2 for this fast, sincere and interactive debugging.
Thanks. Sorry the existing `linprog` didn't work out for you. I know you switched to C++, but in case you prefer Python, HiGHS will be available through the `linprog` interface in the next release of SciPy.
Hi. I am grateful to the contributors to scipy and its extended communities. I learned the history of scipy on Wikipedia and confirmed why this library is among the highest-quality OSS. But unfortunately, I found what is apparently a bug in scipy.optimize.linprog. When I run the following test.py, the LP solver incorrectly reports that the LP is infeasible.
Reproducing code example:
Error message:
scipy.optimize.linprog reports the following UNSAT decision.
Scipy/Numpy/Python version information:
Here is the version info. Perhaps a little old?
Can you reproduce this? I don't have much time to look into the implementation of this library, but I can generate MANY incorrect reports of the same class, all having the same property.
A special property of ref and matA is that all row vectors satisfy the following constraints: [1,1,1,1,1,-1]·ref = 0 and [1,1,1,1,1,-1]·matA[i] = 0 for every i, 0 <= i < 9.
Here ref and each matA[i] lie on the same hyperplane, sharing the normal vector [1,1,1,1,1,-1]. So my initial guess is that the bug might result from inadequately solving a matrix equation when the constraint matrix matA is rank-deficient: a matrix inversion could break a solution to Given_matrix·vecX = Given_vecB. In Eigen3, which I use, there are three QR decompositions designed with such rank deficiency in mind: eigen.tuxfamily.org/dox/classEigen_1_1FullPivHouseholderQR.html eigen.tuxfamily.org/dox/classEigen_1_1ColPivHouseholderQR.html eigen.tuxfamily.org/dox/classEigen_1_1HouseholderQR.html
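As a quick check of that guess, the rank deficiency is easy to reproduce in Python; a sketch with a synthetic matrix built to satisfy the stated hyperplane property:

import numpy as np

rng = np.random.default_rng(0)
matA = rng.standard_normal((9, 5))
matA = np.hstack([matA, matA.sum(axis=1, keepdims=True)])  # last column = sum of the others
A_eq = matA.T                        # rows of A_eq are the columns of matA
n = np.array([1, 1, 1, 1, 1, -1])
print(np.allclose(n @ A_eq, 0))      # True: the normal vector annihilates every row
print(np.linalg.matrix_rank(A_eq))   # 5 < 6: the equality constraints are rank-deficient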
IPOPT also fails to find a SATisfiable assignment and instead converges incorrectly in a similar setting, but this happens less frequently than with scipy.optimize.linprog. Another guess is that it could result from the difficulty of solving an LP in IEEE 754 floating-point arithmetic, where rounding errors accumulate. A similar technical issue might exist in scipy.optimize.linprog/interior-point, but I am not sure.
Best regards, Masataka Nishi