RobotLocomotion / drake

Model-based design and verification for robotics.
https://drake.mit.edu

can we have a "Simplify" method for symbolic::Expression? #8515

Open RussTedrake opened 6 years ago

RussTedrake commented 6 years ago

Sorry for not streamlining my example, but I was playing with deriving some equations of motion using symbolic::Expression (and numpy makes some of this syntax quite gross):

from pydrake.symbolic import (sin, cos, Jacobian, Variable)
import numpy as np

qst = Variable('qst') # Stance angle (relative to vertical)
qh = Variable('qh')  # Hip angle (0 means swing leg is straight up...  pi is default)
vst = Variable('vst') # Stance angular velocity
vh = Variable('vh') # Hip angular velocity

q = np.ndarray((2,1),dtype=object,buffer=np.array([qst,qh]))
v = np.ndarray((2,1),dtype=object,buffer=np.array([vst,vh]))

mh = Variable('mh')
m = Variable('m')
l = Variable('l')
b = Variable('b')
a = l-b
g = Variable('g')
slope = Variable('slope')

# Kinematics
p_mst = a*np.ndarray((2,1),dtype=object,buffer=np.array([sin(qst), cos(qst)]))
v_mst = Jacobian(p_mst,np.ndarray((1,1),dtype=object,buffer=np.array([qst]))).dot(np.array([vst]))
p_h = l*np.ndarray((2,1),dtype=object,buffer=np.array([sin(qst), cos(qst)]))
v_h = Jacobian(p_h,np.ndarray((1,1),dtype=object,buffer=np.array([qst]))).dot(np.array([vst]))
qsw = qst+qh
p_msw = p_h + b*np.ndarray((2,1),dtype=object,buffer=np.array([sin(qsw),cos(qsw)]))
v_msw = Jacobian(p_msw,np.ndarray((2,1),dtype=object,buffer=np.array([qst,qh]))).dot(np.array([vst,vh]))

# Kinetic and potential energy
T = .5*m*v_mst.dot(v_mst) + .5*mh*v_h.dot(v_h) + .5*m*v_msw.dot(v_msw)
U = m*g*p_mst.item(1) + mh*g*p_h.item(1) + m*g*p_msw.item(1)

# Lagrangian
L = T-U
M = np.ndarray((2,2),dtype=object)
M[0,0] = T.Differentiate(vst).Differentiate(vst)
M[0,1] = T.Differentiate(vst).Differentiate(vh)
M[1,0] = T.Differentiate(vh).Differentiate(vst)
M[1,1] = T.Differentiate(vh).Differentiate(vh)
print(M[0,0])

bias = np.ndarray((2,1),dtype=object)
bias[0] = T.Differentiate(vst).Jacobian(q).dot(v)-L.Differentiate(qst)
bias[1] = T.Differentiate(vh).Jacobian(q).dot(v)-L.Differentiate(qh)
#print(bias)

this currently prints

(0.5 * (mh * (2 * (pow(l, 2) * pow(sin(qst), 2)) + 2 * (pow(l, 2) * pow(cos(qst), 2)))) + 0.5 * (m * (2 * (pow((l - b), 2) * pow(sin(qst), 2)) + 2 * (pow((l - b), 2) * pow(cos(qst), 2)))) + 0.5 * (m * (2 * pow(( - (l * sin(qst)) - (b * sin((qst + qh)))), 2) + 2 * pow(((l * cos(qst)) + (b * cos((qst + qh)))), 2))))

which actually reduces to something like

mh*l^2 + m*l^2 + m*a^2 + m*b^2 + 2*m*l*b*cos(qh)

(but it's pretty hard to see!)

soonho-tri commented 6 years ago

I think expanding the expression first and then applying the trigonometric simplification rule

∀x. sin²(x) + cos²(x) = 1

would do the job. I will implement the trig-simplification as a separate function and check if this gives us what we want.

soonho-tri commented 6 years ago

FTR: I just checked how sympy handles this problem. They implemented an algorithm called Fu, from this paper.

soonho-tri commented 6 years ago

FYI, this is the output from sympy.trigsimp:

>>> e = (0.5 * (mh * (2 * (pow(l, 2) * pow(sin(qst), 2)) + 2 * (pow(l, 2) * pow(cos(qst), 2))))
...      + 0.5 * (m * (2 * (pow((l - b), 2) * pow(sin(qst), 2)) + 2 * (pow((l - b), 2) * pow(cos(qst), 2))))
...      + 0.5 * (m * (2 * pow((- (l * sin(qst)) - (b * sin((qst + qh)))), 2) + 2 * pow(((l * cos(qst)) + (b * cos((qst + qh)))), 2))))

>>> sympy.trigsimp(e)
1.0*l**2*mh + 1.0*m*(b - l)**2 + 1.0*m*(b**2 + 2*b*l*cos(qh) + l**2)

soonho-tri commented 6 years ago

Update: I have a WIP branch where the following example works. (Updated: see the full example below.)

RussTedrake commented 6 years ago

great!

edrumwri commented 6 years ago

This is so cool. I love symbolic manipulation.

amcastro-tri commented 6 years ago

That is ridiculously cool!

soonho-tri commented 6 years ago

#8569 is in-flight. There is another PR in the queue: https://github.com/soonho-tri/drake/tree/congruence where we can do:

  const Expression e =
      (0.5 * (mh * (2 * (pow(l, 2) * pow(sin(qst), 2)) +
                    2 * (pow(l, 2) * pow(cos(qst), 2)))) +
       0.5 * (m * (2 * (pow((l - b), 2) * pow(sin(qst), 2)) +
                   2 * (pow((l - b), 2) * pow(cos(qst), 2)))) +
       0.5 * (m * (2 * pow((-(l * sin(qst)) - (b * sin((qst + qh)))), 2) +
                   2 * pow(((l * cos(qst)) + (b * cos((qst + qh)))), 2))));

  // Output from sympy.trigsimp
  const Expression expected = (1.0 * l * l * mh + 1.0 * m * (b - l) * (b - l) +
                               1.0 * m * (b * b + 2 * b * l * cos(qh) + l * l))
                                  .Expand();
  // sin²(x) => 1 - cos²(x)
  const Rewriter sin_square =
      MakeRuleRewriter(RewritingRule(sin(x_) * sin(x_), 1 - cos(x_) * cos(x_)));
  // sin(x + y) => sin(x)cos(y) + cos(x)sin(y)
  const Rewriter sin_sum_of_angle = MakeRuleRewriter(
      RewritingRule(sin(x_ + y_), sin(x_) * cos(y_) + cos(x_) * sin(y_)));
  // cos(x + y) => cos(x)cos(y) - sin(x)sin(y)
  const Rewriter cos_sum_of_angle = MakeRuleRewriter(
      RewritingRule(cos(x_ + y_), cos(x_) * cos(y_) - sin(x_) * sin(y_)));
  const Rewriter expander = [](const Expression& e) { return e.Expand(); };

  const Rewriter simplifier =
      MakeFixpointRewriter(MakeCongruenceRewriter(MakeTryRewriter(
          sin_square, sin_sum_of_angle, cos_sum_of_angle, expander)));

  EXPECT_PRED2(ExprEqual, simplifier(e), expected);

@mitiguy and @sherm1, this is an example of the approach that I described in Slack. I'll write up some brief documentation tomorrow.

sherm1 commented 6 years ago

Cool! I'd love to see it tackle @amcastro-tri's example from Slack.

amcastro-tri commented 6 years ago

That's great! How does this rewriter strategy work? Is it unidirectional? Meaning, is RewritingRule(cos(x_ + y_), cos(x_) * cos(y_) - sin(x_) * sin(y_)) the same as RewritingRule(cos(x_) * cos(y_) - sin(x_) * sin(y_), cos(x_ + y_)), where I just swapped the arguments? That is, does MakeFixpointRewriter choose in which direction to perform the rewrite based on final expression length or something of the like?

soonho-tri commented 6 years ago

That's great! How does this rewriter strategy work? Is it unidirectional?

This is directional. For example:

// sin²(x) => 1 - cos²(x)
RewritingRule sin_sq{sin(x_) * sin(x_), 1 - cos(x_) * cos(x_)};

We can add a method RewritingRule RewritingRule::Reverse() to generate a rule that rewrites in the opposite direction.
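
A minimal sketch of what that could look like, assuming RewritingRule exposes lhs()/rhs() accessors for its two patterns (accessor names hypothetical):

RewritingRule Reverse(const RewritingRule& rule) {
  // Swap the patterns: the old RHS becomes the pattern to match and the
  // old LHS becomes the replacement.
  return RewritingRule{rule.rhs(), rule.lhs()};
}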

BTW, it's easy to make a non-terminating rewriter. One of the goals is to provide a rich set of combinators so that users can express their intent clearly (and avoid non-termination) while maintaining efficiency.

soonho-tri commented 6 years ago

@amcastro-tri , I'm playing with your (monster) examples. When I have a good result, I'll post it here.

SeanCurtis-TRI commented 6 years ago

If rules are bi-directional, or if two rules are added that are inverses of each other, I'm a bit concerned about what that means for the simplification algorithm. What prevents the algorithm from ending up in an infinitely large search space, endlessly applying and reversing a rule?

I ask this in ignorance, with no sense at all of what the simplification algorithm looks like. I can imagine scenarios where it's greedy and only allows monotonically decreasing lengths, which would preclude this type of behavior (and also offer no guarantees about optimality, because it'll fall into a local minimum).

I can also imagine a "smarter" algorithm that tries to search the space of all valid equivalent expressions, which could easily be caught in such a trap. It would rest on the valid principle that, by replacing a short expression with a longer one, I can then use another rule to collapse that longer expression into something shorter still.

mitiguy commented 6 years ago

@soonho-tri As you implement this, will the default behavior be to consistently decrease the number of operations in an expression (in other words, the default is shorter = better), where "shorter" means a quantifiable decrease in the number of operations or function calls (functions like sine and cosine)? And will this default behavior happen automatically, without searching the space of all valid equivalent expressions?

soonho-tri commented 6 years ago

@SeanCurtis-TRI and @mitiguy : I am adding a combinator to control the behavior of rewriters:

Conditional : Rewriter -> (Expression * Expression -> bool) -> Rewriter

Conditional(rw, predicate)(e) will return rw(e) iff predicate(e, rw(e)) is true; otherwise it returns e (no change). By providing a custom binary predicate, we can apply different measures (e.g., the number of sin/cos calls in an expression) when rewriting.
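
For concreteness, here is a minimal sketch of such a combinator, assuming Rewriter is a std::function alias like the one below (the alias names are mine, not necessarily what the branch uses):

#include <functional>
#include <utility>

#include "drake/common/symbolic.h"

using drake::symbolic::Expression;

using Rewriter = std::function<Expression(const Expression&)>;
using RewritePredicate =
    std::function<bool(const Expression&, const Expression&)>;

Rewriter Conditional(Rewriter rw, RewritePredicate pred) {
  return [rw = std::move(rw), pred = std::move(pred)](const Expression& e) {
    const Expression rewritten{rw(e)};
    // Keep the rewrite only when the predicate approves of (before, after);
    // otherwise return the input unchanged.
    return pred(e, rewritten) ? rewritten : e;
  };
}

For example, a predicate that returns true only when the number of sin/cos calls strictly decreases would give the greedy, monotone behavior @mitiguy asks about, and would also rule out non-termination for that rewriter.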

hongkai-dai commented 5 years ago

@soonho-tri , could you update the status of this issue? Thanks a lot!

jwnimmer-tri commented 3 years ago

Some simplifications (e.g., x / x => 1) might only hold under certain predicates (e.g., x != 0). One idea might be to have an option where simplification aggressively produces both a new Expression, and separately the Formula under which the simplification would hold. In other words, the function would produce two different return values (simplified Expression, guard Formula).

See also a stackoverflow question that relates.
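
A sketch of one possible shape for that API (the name SimplifyWithGuard and its signature are hypothetical):

#include <utility>

#include "drake/common/symbolic.h"

using drake::symbolic::Expression;
using drake::symbolic::Formula;

// Hypothetical: simplify aggressively, and also return the guard Formula
// under which the simplification is valid.  For x / x, the result could be
// {Expression{1.0}, x != 0}, leaving the caller to check or assume the guard.
std::pair<Expression, Formula> SimplifyWithGuard(const Expression& e);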

mitiguy commented 3 years ago
  1. From a user point of view, it seems beneficial for the default simplification behavior to shorten expressions and avoid unhelpful divide-by-zero errors. For mathematical purity, x/x is not always 1; from a practical point of view, though, how does a divide-by-zero error make my life better?

  2. Certain simplifications depend on the range of values allowed by an underlying symbol (e.g., real, non-negative, etc.). For example, sqrt(L^2) = L is useful when L is a real non-negative number. For symbolic simplification, I find it beneficial to designate certain symbols as inherently non-negative. These include mass, moments of inertia, g (gravity), lengths, coefficients of friction, ...

  3. Certain ranges of values also allow additional simplification, e.g., acos(cos(x)) = x for 0 <= x <= pi. In general, restricting the range of values on a symbol can facilitate simplification, e.g., x != 0, x >= 0, or 0 <= x <= pi.

AlexandreAmice commented 2 years ago

Are the Python bindings for this PR written? If not, where should I look to write them up? I am currently trying to clear the denominators of equality constraints of the form (1 - t(1)**2)/(1 + t(1)**2) = (2*t(2))/(1 - t(1)**2), and it seems like there isn't a straightforward way to do this in pydrake symbolic right now.

RussTedrake commented 2 years ago

@AlexandreAmice -- what exactly do you need to do? You have a symbolic::Formula made up of rational polynomial expressions, constructed from some algorithm... and you want to multiply through by the denominators until both sides are polynomial? Isn't that more related to https://reviewable.io/reviews/robotlocomotion/drake/16014 than to simplification?

AlexandreAmice commented 2 years ago

I don't believe that https://reviewable.io/reviews/robotlocomotion/drake/16014 is applicable here, at least not easily.

I have two rational expressions which are equal to each other (and whose denominators will never be 0). I would like to be able to multiply both rational expressions by a common denominator and simplify the result so that it returns a symbolic::Polynomial, since MathematicalProgram can only handle symbolic::Polynomial constraints.

This seems better handled by a multiply then simplify strategy.

RussTedrake commented 2 years ago

I understand that you have f = a/b == c/d, all polynomials, but trapped in a Formula. Let's make sure we know what would happen in C++ (then we can see whether those pieces are available in the bindings).

I think you could "unpack" the Formula using Formula::to_equal_to, then use get_lhs_expression() and get_rhs_expression(). Or maybe you can work with the expressions from the beginning and never have to unpack the formula.

Now you have two rational expressions. If you could unpack those into numerator and denominator (e.g. with is_division() and to_division()), then you can multiply things through yourself. https://drake.mit.edu/doxygen_cxx/classdrake_1_1symbolic_1_1_expression.html#a8b5a6641d59b4bc6ee08c725e7742b90 says that Expand will simplify division by a constant, but not more. Doing these operations piecemeal is fragile, so that's where the unapply PR might help?
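
As a rough sketch of that recipe (not an existing Drake helper; it assumes each side of the Formula is a single division whose denominator is known to be nonzero):

#include "drake/common/drake_assert.h"
#include "drake/common/symbolic.h"

using drake::symbolic::Expression;
using drake::symbolic::Formula;

// Turns f of the form a/b == c/d into the polynomial-friendly a*d == c*b.
Formula ClearDenominators(const Formula& f) {
  DRAKE_DEMAND(is_equal_to(f));
  const Expression& lhs = get_lhs_expression(f);
  const Expression& rhs = get_rhs_expression(f);
  DRAKE_DEMAND(is_division(lhs) && is_division(rhs));
  const Expression& a = get_first_argument(lhs);   // numerator of lhs
  const Expression& b = get_second_argument(lhs);  // denominator of lhs
  const Expression& c = get_first_argument(rhs);   // numerator of rhs
  const Expression& d = get_second_argument(rhs);  // denominator of rhs
  // Multiply through and Expand() so downstream code sees polynomial form.
  return (a * d).Expand() == (c * b).Expand();
}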

If you have an expression that is polynomial (not rational), but ugly, then converting it to symbolic::Polynomial will take care of a lot of simplification simply by expressing it in terms of the monomials, etc.
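
For example (a small sketch; the helper name is mine):

#include "drake/common/symbolic.h"

using drake::symbolic::Expression;
using drake::symbolic::Polynomial;

// Round-tripping through Polynomial collects like monomials, which already
// cleans up many ugly-but-polynomial expressions.
Expression TidyPolynomialExpression(const Expression& e) {
  return Polynomial{e}.ToExpression();
}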

jwnimmer-tri commented 2 years ago

Putting this on the backlog for now. We'll probably want to come back here and figure out what to do next about the combinator stuff, but for now I think this is on hold.

jwnimmer-tri commented 1 year ago

For symbolic manipulation, the best option might be to make it easy to convert back and forth between pydrake symbolic and sympy. Then the user could use all of the sympy features to clean up their formulae the way they prefer, but could still feed the final answer into MathematicalProgram, etc.