FYI, @frankpermenter, a related Slack discussion: https://drakedevelopers.slack.com/archives/C3L92BM2Q/p1553473456007900
The basis selection algorithm, as designed, is deterministic, i.e., it should always produce the same output given the same input. (This is despite the call to RandomSupportingHyperplanes().) So, I'll need more information to debug this. Can you post a complete example that exhibits randomness?
```python
from __future__ import print_function
import numpy as np
import meshcat
from matplotlib import cm
import matplotlib.pyplot as plt
from pydrake.solvers import mathematicalprogram as mp
import pydrake.symbolic as sym
from pydrake.solvers.mosek import MosekSolver

# constants and geometry
mg = 10.
l = 0.5
mu = 0.2
f_max = 0.5*mg


def CalcTau(s, c):
    return mg*np.array([s, c, 0])


def Cross2D(u, v):
    assert u.size == 2
    assert v.size == 2
    return u[0]*v[1] - u[1]*v[0]


def CalcG(s, c, x1, x2):
    n1 = np.array([1, 0])
    n2 = np.array([-1, 0])
    n3 = np.array([s, c])
    N = np.vstack((n1, n2, n3)).T
    t1 = np.array([0, 1])
    t2 = np.array([0, 1])
    t3 = np.array([-c, s])
    T = np.vstack((t1, t2, t3)).T
    r1 = np.array([-l, x1 - l])
    r2 = np.array([l, x2 - l])
    r3 = np.array([-l, -l])
    R = np.vstack((r1, r2, r3))
    V = np.hstack((N + mu*T, N - mu*T))
    G = np.vstack((V, np.zeros(6)))
    for i in range(3):
        G[2, i] = Cross2D(R[i], G[0:2, i])
        G[2, i+3] = Cross2D(R[i], G[0:2, i+3])
    return G / np.sqrt(1+mu**2)


x1 = 0.6
x2 = 0.6

prog = mp.MathematicalProgram()
s = prog.NewIndeterminates(1, "s")[0]
c = prog.NewIndeterminates(1, "c")[0]
f = prog.NewIndeterminates(6, "f")
n = 1  # total degree of lagrange multiplier polynomials
monomial_basis = sym.MonomialBasis(np.hstack((f, s, c)), n)
rho = prog.NewContinuousVariables(1, "rho")[0]
variables = sym.Variables(np.hstack((f, s, c)))
p = sym.Polynomial(s - rho, variables)
lambda_sl, Q_sl = prog.NewSosPolynomial(monomial_basis)  # sin lower bound
lambda_cl, Q_cl = prog.NewSosPolynomial(monomial_basis)  # cos lower bound
p -= lambda_sl * sym.Polynomial(s)
p -= lambda_cl * sym.Polynomial(c)
# s^2 + c^2 = 1
lambda_sc = prog.NewFreePolynomial(variables, 4)
p += sym.Polynomial(s**2 + c**2 - 1)*lambda_sc
# force balance constraint
G = CalcG(s, c, x1, x2)
tau = CalcTau(s, c)
Gf = G.dot(f)
for i in range(3):
    lambda_balance = prog.NewFreePolynomial(variables, 4)
    p += sym.Polynomial(Gf[i] - tau[i])*lambda_balance
# bounding box constraint on f
for fi in f:
    lambda_fl, Ql = prog.NewSosPolynomial(monomial_basis)  # f >= 0
    p -= lambda_fl * sym.Polynomial(fi)
# upper bound on contact forces
for i in range(2):
    lambda_fu, Qu = prog.NewSosPolynomial(monomial_basis)
    p -= lambda_fu * sym.Polynomial(f_max - (f[i] + f[i+3])/np.sqrt(1+mu**2))
# s0, _ = prog.NewSosPolynomial(sym.MonomialBasis(np.hstack((f, s, c)), 3))
# for k, v in s0.monomial_to_coefficient_map().iteritems():
#     if p.monomial_to_coefficient_map().has_key(k):
#         prog.AddLinearConstraint(p.monomial_to_coefficient_map()[k] == v)
#     else:
#         prog.AddLinearConstraint(v == 0)
prog.AddSosConstraint(p)
prog.AddLinearCost(-rho)
solver = MosekSolver()
solver.set_stream_logging(True, "")
result = solver.Solve(prog)
print(result)
```
Running this snippet with a Drake build from a few weeks ago, the result is either feasible but wrong, or `kUnknownError`. @RussTedrake This is the nondeterministic behavior I was referring to in the presentation on Monday.
I can reproduce @pangtao22's observation that the Mosek solver gives different results. I can confirm that `sos_basis_generator` gives the same result.
I actually asked Mosek to print out its data by calling Mosek's function `MSK_writedata`. The output data is the same across runs. On the next line I call Mosek's optimizer `MSK_optimizetrm`, and the result is different.
I also solved the same problem with SCS 15 times, and every run produced the same solution.
I think this strongly suggests that `MosekSolver` in `MathematicalProgram` is not deterministic. I sent my problem data to Mosek and asked them if they have any insights. I will post their answer on this issue later.
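For reference, a minimal sketch of that kind of check via the Mosek C API (not the exact code used in this thread): read back a task file previously dumped with `MSK_writedata` and re-optimize it several times, comparing the termination code and objective. The file name `dump.task` is just a placeholder.

```cpp
#include <cstdio>

#include "mosek.h"

int main() {
  MSKenv_t env = nullptr;
  MSK_makeenv(&env, nullptr);
  for (int run = 0; run < 5; ++run) {
    MSKtask_t task = nullptr;
    MSK_maketask(env, 0, 0, &task);
    // Load the exact problem data that was written out with MSK_writedata
    // ("dump.task" is a placeholder file name).
    MSK_readdata(task, "dump.task");
    MSKrescodee trmcode;
    MSK_optimizetrm(task, &trmcode);
    double obj = 0.0;
    MSK_getprimalobj(task, MSK_SOL_ITR, &obj);
    std::printf("run %d: termination code %d, primal objective %.12g\n",
                run, static_cast<int>(trmcode), obj);
    MSK_deletetask(&task);
  }
  MSK_deleteenv(&env);
  return 0;
}
```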
It turns out the problem is actually in `MathematicalProgram::NewSosPolynomial`, which creates a `symbolic::Polynomial` object. Internally, `symbolic::Polynomial` stores an unordered map from each monomial to its coefficient, and this unordered map provides no guarantee on the ordering of the (monomial, coefficient) pairs. When we impose constraints on the coefficients of this symbolic polynomial, we loop through this unordered map and add one constraint per entry, so the order of the constraints can differ from run to run. Apparently Mosek/SCS are sensitive to the constraint order for our particular problem, and they generate inconsistent results on each run.
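As a minimal standalone illustration of the container behavior (using `std::string` keys as a stand-in for `symbolic::Monomial`), iterating the same entries through `std::unordered_map` versus `std::map` shows why only the latter gives a reproducible constraint order:

```cpp
#include <iostream>
#include <iterator>
#include <map>
#include <string>
#include <unordered_map>
#include <utility>

int main() {
  // The same (monomial, coefficient) entries, stored in both containers.
  const std::pair<std::string, double> coeffs[] = {
      {"x^2", 1.0}, {"x*y", -2.0}, {"y^2", 3.0}, {"1", 0.5}};

  std::unordered_map<std::string, double> unordered(std::begin(coeffs),
                                                    std::end(coeffs));
  std::map<std::string, double> ordered(std::begin(coeffs), std::end(coeffs));

  // Iteration order here is implementation-defined; code that emits one
  // constraint per entry inherits whatever order the hash table produces.
  std::cout << "unordered_map order:";
  for (const auto& kv : unordered) std::cout << " " << kv.first;
  std::cout << "\n";

  // std::map always iterates in key order, so constraints built from it
  // come out in the same order every time.
  std::cout << "map order:";
  for (const auto& kv : ordered) std::cout << " " << kv.first;
  std::cout << "\n";
  return 0;
}
```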
Also cc @soonho-tri
@hongkai-dai, PTAL: https://github.com/soonho-tri/drake/commits/pr-use-std-map-in-polynomial
By applying @soonho-tri's patch that uses `std::map` instead of `std::unordered_map`, we now get consistent results in @pangtao22's test programs. This validates the hypothesis that the inconsistency is caused by the order of the constraints.
This makes me concerned about the solver performance. For the same problem, if merely changing the order of the constraints leads to drastically different results from Mosek/SCS, how can we say definitively whether our original problem is solvable or not? Why does our SOS problem have such sensitive numerics?
Wow. I am similarly concerned.
But great find. Thank you for debugging!
I noticed that I obtain random results when running SCS and/or Mosek on a simple SOS optimization. @hongkai-dai found the source here: https://github.com/RobotLocomotion/drake/blob/580f7007136672b566a975a38de46c0fb1007ce5/solvers/sos_basis_generator.cc#L133-L147
This means that I cannot get repeatable results if I want to, e.g., use this inside a system during simulation. (It is also hard to debug an optimization!)
I think we need to either (1) replace the algorithm (@frankpermenter already has one in mind, I believe), or, less ideally, (2) create an API hook so that we can set the random generator from the one owned by the simulation framework; a hypothetical sketch of such a hook is below.
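Purely as an illustration of option (2), and not an existing Drake API (all names below are hypothetical), the idea would be to let the caller hand in the generator that the hyperplane sampling draws from, so a fixed seed reproduces the same basis:

```cpp
#include <iostream>
#include <random>
#include <vector>

// Hypothetical stand-in for the sampling done inside
// RandomSupportingHyperplanes(): draw one random hyperplane normal from a
// caller-provided generator instead of an internally constructed one.
std::vector<double> SampleHyperplaneNormal(int dim, std::mt19937* generator) {
  std::normal_distribution<double> gaussian(0.0, 1.0);
  std::vector<double> normal(dim);
  for (int i = 0; i < dim; ++i) {
    normal[i] = gaussian(*generator);
  }
  return normal;
}

int main() {
  // Two generators seeded identically (e.g. by the simulation framework)
  // produce identical hyperplanes, which is the repeatability we want.
  std::mt19937 g1(1234), g2(1234);
  const auto a = SampleHyperplaneNormal(3, &g1);
  const auto b = SampleHyperplaneNormal(3, &g2);
  std::cout << (a == b ? "identical" : "different") << std::endl;
  return 0;
}
```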
cc @ggould-tri