RussTedrake opened this issue 6 years ago
I think the way to do this is to first expand the expression and then apply the trigonometric simplification rule
∀x. sin²(x) + cos²(x) = 1
I will implement the trig simplification as a separate function and check whether it gives us what we want.
FTR: I just checked how sympy handles this problem. It implements an algorithm called Fu, from this paper.
FYI, this is the output from sympy.trigsimp:
>>> e = (0.5 * (mh * (2 * (pow(l, 2) * pow(sin(qst), 2)) + 2 * (pow(l, 2) * pow(cos(qst), 2))))
+ 0.5 * (m * (2 * (pow((l - b), 2) * pow(sin(qst), 2)) + 2 * (pow((l - b), 2) * pow(cos(qst), 2))))
+ 0.5 * (m * (2 * pow(( - (l * sin(qst)) - (b * sin((qst + qh)))), 2) + 2 * pow(((l * cos(qst))
+ (b * cos((qst + qh)))), 2))))
>>> sympy.trigsimp(e)
1.0*l**2*mh + 1.0*m*(b - l)**2 + 1.0*m*(b**2 + 2*b*l*cos(qh) + l**2)
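As a sanity check, the two forms can be compared numerically in plain Python (variable names follow the snippet above; this is just a spot check, not part of the thread's code):

```python
import math
import random

def original(mh, m, l, b, qst, qh):
    """The expression fed to sympy.trigsimp, transcribed with ** for pow."""
    s, c = math.sin, math.cos
    return (0.5 * (mh * (2 * (l**2 * s(qst)**2) + 2 * (l**2 * c(qst)**2)))
            + 0.5 * (m * (2 * ((l - b)**2 * s(qst)**2)
                          + 2 * ((l - b)**2 * c(qst)**2)))
            + 0.5 * (m * (2 * (-(l * s(qst)) - b * s(qst + qh))**2
                          + 2 * (l * c(qst) + b * c(qst + qh))**2)))

def simplified(mh, m, l, b, qst, qh):
    """The output reported by sympy.trigsimp above."""
    return (1.0 * l**2 * mh + 1.0 * m * (b - l)**2
            + 1.0 * m * (b**2 + 2 * b * l * math.cos(qh) + l**2))

random.seed(0)
for _ in range(100):
    args = [random.uniform(-2.0, 2.0) for _ in range(6)]
    assert math.isclose(original(*args), simplified(*args),
                        rel_tol=1e-9, abs_tol=1e-9)
```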
Update: I have a WIP branch where the following example works:
Updated: See below.
great!
This is so cool. I love symbolic manipulation.
Evan Drumwright Senior Research Scientist http://positronicslab.github.io Toyota Research Institute Palo Alto, CA
On Wed, Apr 4, 2018 at 4:09 PM, Russ Tedrake notifications@github.com wrote:
great!
That is ridiculously cool!
const Expression e =
(0.5 * (mh * (2 * (pow(l, 2) * pow(sin(qst), 2)) +
2 * (pow(l, 2) * pow(cos(qst), 2)))) +
0.5 * (m * (2 * (pow((l - b), 2) * pow(sin(qst), 2)) +
2 * (pow((l - b), 2) * pow(cos(qst), 2)))) +
0.5 * (m * (2 * pow((-(l * sin(qst)) - (b * sin((qst + qh)))), 2) +
2 * pow(((l * cos(qst)) + (b * cos((qst + qh)))), 2))));
// Output from sympy.trigsimp
const Expression expected = (1.0 * l * l * mh + 1.0 * m * (b - l) * (b - l) +
1.0 * m * (b * b + 2 * b * l * cos(qh) + l * l))
.Expand();
// sin²(x) => 1 - cos²(x)
const Rewriter sin_square =
MakeRuleRewriter(RewritingRule(sin(x_) * sin(x_), 1 - cos(x_) * cos(x_)));
// sin(x + y) => sin(x)cos(y) + cos(x)sin(y)
const Rewriter sin_sum_of_angle = MakeRuleRewriter(
RewritingRule(sin(x_ + y_), sin(x_) * cos(y_) + cos(x_) * sin(y_)));
// cos(x + y) => cos(x)cos(y) - sin(x)sin(y)
const Rewriter cos_sum_of_angle = MakeRuleRewriter(
RewritingRule(cos(x_ + y_), cos(x_) * cos(y_) - sin(x_) * sin(y_)));
const Rewriter expander = [](const Expression& e) { return e.Expand(); };
const Rewriter simplifier =
MakeFixpointRewriter(MakeCongruenceRewriter(MakeTryRewriter(
sin_square, sin_sum_of_angle, cos_sum_of_angle, expander)));
EXPECT_PRED2(ExprEqual, simplifier(e), expected);
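For readers unfamiliar with the combinator style used above, the pipeline (rule rewriters combined with Try, Congruence, and Fixpoint) can be sketched in plain Python over a tiny tuple-based AST. Everything here, the AST encoding and the function names alike, is illustrative, not Drake's actual API:

```python
def match(pattern, expr, env):
    """Match `pattern` (strings starting with '?' are variables) against
    `expr`, extending the binding environment; return None on failure."""
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in env:
            return env if env[pattern] == expr else None
        return {**env, pattern: expr}
    if (isinstance(pattern, tuple) and isinstance(expr, tuple)
            and len(pattern) == len(expr)):
        for p, e in zip(pattern, expr):
            env = match(p, e, env)
            if env is None:
                return None
        return env
    return env if pattern == expr else None

def substitute(template, env):
    """Instantiate a rule's right-hand side with the matched bindings."""
    if isinstance(template, str) and template.startswith('?'):
        return env[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, env) for t in template)
    return template

def make_rule_rewriter(lhs, rhs):
    """Directional rule: rewrite e to rhs if lhs matches, else leave it."""
    def rw(e):
        env = match(lhs, e, {})
        return substitute(rhs, env) if env is not None else e
    return rw

def make_try_rewriter(*rewriters):
    """Apply the first rewriter that changes the expression."""
    def rw(e):
        for r in rewriters:
            out = r(e)
            if out != e:
                return out
        return e
    return rw

def make_congruence_rewriter(r):
    """Apply r bottom-up: to every subterm, then to the whole term."""
    def rw(e):
        if isinstance(e, tuple):
            e = tuple(rw(child) for child in e)
        return r(e)
    return rw

def make_fixpoint_rewriter(r, max_iters=100):
    """Iterate r until nothing changes; the cap guards non-termination."""
    def rw(e):
        for _ in range(max_iters):
            out = r(e)
            if out == e:
                break
            e = out
        return e
    return rw

# sin(x)*sin(x) => 1 - cos(x)*cos(x), as in the C++ snippet above.
sin_square = make_rule_rewriter(
    ('mul', ('sin', '?x'), ('sin', '?x')),
    ('sub', 1, ('mul', ('cos', '?x'), ('cos', '?x'))))

simplifier = make_fixpoint_rewriter(
    make_congruence_rewriter(make_try_rewriter(sin_square)))

e = ('add', ('mul', ('sin', 'q'), ('sin', 'q')),
            ('mul', ('cos', 'q'), ('cos', 'q')))
print(simplifier(e))
```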
@mitiguy and @sherm1, this is an example of the approach that I described on Slack. I'll write up some brief documentation tomorrow.
Cool! Love to see it tackle @amcastro-tri's example from Slack.
That's great! How does this rewriter strategy work? Is it unidirectional? Meaning, is RewritingRule(cos(x_ + y_), cos(x_) * cos(y_) - sin(x_) * sin(y_)) the same as RewritingRule(cos(x_) * cos(y_) - sin(x_) * sin(y_), cos(x_ + y_)) (where I just swapped the arguments)? That is, does MakeFixpointRewriter choose in which direction to perform a rewrite based on the final expression length, or something of the like?
That's great! how does this rewriter strategy work? is it unidirectional?
This is directional. For example:
// sin²(x) => 1 - cos²(x)
RewritingRule sin_sq{sin(x_) * sin(x_), 1 - cos(x_) * cos(x_)};
We can add a method RewritingRule RewritingRule::Reverse() to generate a rule with the opposite direction.
BTW, it's easy to make a non-terminating rewriter. One of the goals is to provide a rich set of combinators so that users can express their intent clearly (and avoid non-termination) while maintaining efficiency.
@amcastro-tri , I'm playing with your (monster) examples. When I have a good result, I'll post it here.
If rules are bi-directional, or if two rules are added that are inverses of each other, I'm a bit concerned what that means for the simplification algorithm. What prevents the algorithm from ending up in an infinitely large search space of applying and reversing a rule?
I ask this in ignorance with no sense at all of what the simplification algorithm looks like. I can imagine scenarios where it's greedy and only allows monotonically decreasing lengths which would preclude this type of behavior (and also have no guarantees about optimality because it'll fall into a local minimum).
I can also imagine a "smarter" algorithm that tries to search the space of all valid equivalent expressions which could easily be caught in such a trap. It would be based on the valid principle that, by replacing a short expression with a longer expression, I can then use another rule to collapse that longer expression with subsequent expressions that are shorter even still.
@soonho-tri As you implement this, will the default behavior be to consistently decrease the number of operations in an expression (in other words, the default is shorter = better), where "shorter" means a quantifiable decrease in the number of operations or function calls (functions like sine and cosine)? And will this default behavior happen automatically, without searching the space of all valid equivalent expressions?
@SeanCurtis-TRI and @mitiguy: I am adding a combinator to control the behavior of rewriters:
Conditional : Rewriter -> (Expression * Expression -> bool) -> Rewriter
Conditional(rw, predicate)(e) will return rw(e) iff predicate(e, rw(e)) is true; otherwise it returns e (no change). By providing a custom binary predicate, we can plug in different measures (e.g., the number of sin/cos terms in an expression) to steer rewriting.
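A plain-Python sketch of that Conditional combinator (the names and the toy string "rewriter" are illustrative only, not Drake's API):

```python
def conditional(rw, predicate):
    """Accept rw's result only when predicate(before, after) approves it."""
    def guarded(e):
        out = rw(e)
        return out if predicate(e, out) else e
    return guarded

# Example measure: accept a rewrite only if it does not grow the expression.
def shorten_only(before, after):
    return len(after) <= len(before)

# A toy "rewriter" over strings, standing in for a real expression rewriter.
def expand(e):
    return e.replace("sin2", "sin*sin")

guarded_expand = conditional(expand, shorten_only)
# The expansion would grow the string, so the guarded rewriter rejects it.
print(guarded_expand("sin2+1"))
```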
@soonho-tri , could you update the status of this issue? Thanks a lot!
Some simplifications (e.g., x / x => 1) might only hold under certain predicates (e.g., x != 0). One idea might be to have an option where simplification aggressively produces both a new Expression and, separately, the Formula under which the simplification would hold. In other words, the function would produce two different return values (simplified Expression, guard Formula).
See also a stackoverflow question that relates.
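The shape of that API could be sketched in Python over a hypothetical tuple AST (none of these names are real pydrake API):

```python
def simplify_with_guard(e):
    """Return (simplified expression, guard formula); the rewrite is only
    sound when the guard holds. Both the AST and the guard encoding are
    made up for illustration."""
    if isinstance(e, tuple) and len(e) == 3 and e[0] == 'div' and e[1] == e[2]:
        return 1, ('neq', e[1], 0)   # x / x => 1, guarded by x != 0
    return e, True                   # no rewrite; trivially sound

expr, guard = simplify_with_guard(('div', 'x', 'x'))
print(expr, guard)
```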
From a user's point of view, it seems beneficial for the default simplification behavior to shorten expressions and avoid unhelpful divide-by-zero errors. For mathematical purity, x/x is not always 1; from a practical point of view, though, how does a divide-by-zero error make my life better?
Certain simplifications depend on the range of values allowed by an underlying symbol (e.g., real, non-negative, etc.). For example, sqrt(L^2) = L is useful when L is a real non-negative number. For symbolic simplification, I find it beneficial to designate certain symbols as inherently non-negative. These include mass, moments of inertia, g (gravity), lengths, coefficients of friction, ...
Certain ranges of values also allow additional simplification, e.g., acos(cos(x)) = x for 0 <= x <= pi. In general, restricting the range of values on a symbol can facilitate simplification, e.g., x != 0, or x >= 0 or 0 <= x <= pi.
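The range restriction matters in practice; a quick numeric check with Python's math module shows where acos(cos(x)) = x breaks down:

```python
import math

# acos returns the principal value in [0, pi], so acos(cos(x)) == x only
# holds when x is already in that interval.
assert math.isclose(math.acos(math.cos(2.0)), 2.0)       # 2.0 in [0, pi]: identity holds
assert not math.isclose(math.acos(math.cos(4.0)), 4.0)   # 4.0 outside: acos gives 2*pi - 4
print(math.acos(math.cos(4.0)))
```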
Have the Python bindings for this PR been written? If not, where should I look to write them? I am currently trying to clear the denominators of equality constraints of the form (1 - t(1)**2)/(1 + t(1)**2) = (2*t(2))/(1 - t(1)**2), and it seems like there isn't a straightforward way to do this in pydrake symbolic right now.
@AlexandreAmice -- what exactly do you need to do? You have a symbolic::Formula made up of rational polynomial expressions, constructed from some algorithm... and you want to multiply through by the denominators until both expressions are polynomial? Isn't that more related to https://reviewable.io/reviews/robotlocomotion/drake/16014 than to simplification?
I don't believe that https://reviewable.io/reviews/robotlocomotion/drake/16014 is applicable here, at least not easily.
I have two rational expressions which are equal to each other (and whose denominators will never be 0). I would like to be able to multiply both rational expressions by a common denominator and simplify the expression so that it returns a symbolic::Polynomial, since MathematicalProgram can only handle symbolic::Polynomial constraints.
This seems better handled by a multiply then simplify strategy.
I understand that you have f = a/b == c/b, all polynomials, but trapped in a Formula. Let's make sure we know what would happen in C++ (then we can see if those operations are available in the bindings).
I think you could "unpack" the Formula using Formula::to_equal_to, then use get_lhs_expression() and get_rhs_expression(). Or maybe you can work with the expressions from the beginning and never have to unpack the formula.
Now you have two rational expressions. If you can unpack those into numerator and denominator (e.g., with is_division() and to_division()), then you can multiply things through yourself. https://drake.mit.edu/doxygen_cxx/classdrake_1_1symbolic_1_1_expression.html#a8b5a6641d59b4bc6ee08c725e7742b90 says that Expand will simplify division by a constant, but not more. Doing these operations piecemeal is fragile, so that's where the unapply PR might help?
If you have an expression that is polynomial (not rational) but ugly, then converting it to symbolic::Polynomial will take care of a lot of simplification, simply by expressing it in terms of the monomials, etc.
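The "multiply through" idea above can be sketched in Python with a toy representation where each side is a (numerator, denominator) pair; the AST and function name are illustrative, not pydrake's API:

```python
from fractions import Fraction

def clear_denominators(lhs, rhs):
    """Turn a/b == c/d into the division-free a*d == c*b, which is
    equivalent whenever both denominators are nonzero."""
    (a, b), (c, d) = lhs, rhs
    return ('eq', ('mul', a, d), ('mul', c, b))

eq = clear_denominators(('p', 'q'), ('r', 's'))
print(eq)

# Numeric sanity check of the underlying identity: for nonzero b and d,
# a/b == c/d exactly when a*d == c*b.
assert (Fraction(2, 4) == Fraction(3, 6)) == (2 * 6 == 3 * 4)
```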
Putting this on the backlog for now. We'll probably want to come back here and figure out what to do next about the combinator stuff, but for now I think this is on hold.
For symbolic manipulation, the best option might be to make it easy to convert back and forth between pydrake symbolic and sympy. Then the user could use all of the sympy features to clean up their formulae the way they prefer, but could still feed the final answer into Mathematical Program or etc.
Sorry for not streamlining my example, but I was playing with deriving some equations of motion using symbolic::Expression (and numpy makes some of this syntax quite gross).
this currently prints
which actually reduces to something like
(but it's pretty hard to see!)