tixxit opened this issue 9 years ago
Hi Tom -

> However, for an expression like `x + y`, we only need to approximate the value if `x.signum * y.signum == -1`

Could you please say more on what "approximate" means?

> Adding a `fastSignum` method which just handles these shortcuts would let us test the sign cheaply before performing unnecessary work.

Could you please say more on these two cases, when doing `x + y`:

- `fastSignum(x, y) == -1`
- `fastSignum(x, y) != -1`

Lastly, since `signum` is a function on real numbers, would all classes implementing trait `Real` need to implement `fastSignum` too?

Thanks
So, `Algebraic` works by basically storing an expression tree/AST, rather than an actual number. This allows us to approximate the number to any desired precision, whenever we want. It does this by propagating error bounds down through the tree. For instance, in order to guarantee that an approximation of `(x + y)` has at least N bits of precision, we need to approximate `x` and `y` to N+1 bits of precision first, before adding them.
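That precision propagation can be sketched roughly like this. The names here (`Expr`, `Lit`, `Add`, `approx`) are illustrative stand-ins, not Spire's actual internals:

```scala
// Hypothetical sketch of precision propagation through an expression tree.
sealed trait Expr
case class Lit(value: BigDecimal) extends Expr
case class Add(lhs: Expr, rhs: Expr) extends Expr

// Approximate an expression to roughly `bits` bits of precision.
def approx(e: Expr, bits: Int): BigDecimal = e match {
  case Lit(v) => v
  case Add(l, r) =>
    // To guarantee ~bits of precision in the sum, ask each child for
    // one extra bit so their combined error stays within the budget.
    approx(l, bits + 1) + approx(r, bits + 1)
}
```

The key point is just that the precision requirement flows *down* the tree, growing at each node, while the approximations flow back up.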
So - how does this relate to `signum`? One of the guarantees of `Algebraic` is that its `signum` operation is exact. So, if `x = -y`, then `(x + y).signum == 0`. To do this, `Algebraic` uses what's called a separation bound (also a "zero bound function", depending on which paper you are reading). Basically, the separation bound of an expression is a lower bound on the absolute value of the expression *if it is not zero* (the conditional is important). This gives us a way to test if a number is zero or not. If we approximate the number so that the error is smaller than 1/2 the separation bound, then the interval "approximation +/- error" cannot contain both 0 and the separation bound. If `abs(approximation) + error` is less than the separation bound, then we can conclude the number (and sign) is zero; otherwise it isn't, and the sign of the value is the sign of the approximation.
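As a rough illustration of that test (names and signature are made up for this sketch, not Spire's API):

```scala
// Illustrative zero/sign test via a separation bound `sep`:
// if the true value is nonzero, then |value| >= sep.
def signViaSeparationBound(approximation: BigDecimal,
                           error: BigDecimal,
                           sep: BigDecimal): Int = {
  // The approximation must be tighter than half the separation bound,
  // so the interval cannot contain both 0 and +/-sep.
  require(error < sep / 2)
  if (approximation.abs + error < sep) 0 // |value| < sep, so value must be 0
  else approximation.signum              // interval excludes 0; sign is reliable
}
```

In the nonzero branch, `abs(approximation) + error >= sep` together with `error < sep / 2` forces `abs(approximation) > error`, so the interval around the approximation cannot contain 0 and its sign is trustworthy.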
In the case of many operations though, testing the sign is much easier and doesn't require an approximation. For example, `(x * y).signum == x.signum * y.signum`. The same goes for division. For nroots/pow, the sign depends on the sign of the sub-expression and whether or not the exponent is odd. Addition is the only operation that actually requires us to do the approximation/separation-bound check discussed above. As you can imagine, calculating the approximation can be very expensive! Even though we use an adaptive approximation approach (99% of the time a fairly bad approximation is enough to tell us the value is not 0), it is nice if we can avoid it altogether.
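Those structural sign rules could be sketched like this (again, the AST names here are hypothetical, not Spire's):

```scala
// Illustrative structural sign rules; signs are -1, 0, or 1.
sealed trait SignExpr
case class Const(sign: Int) extends SignExpr // sign already known
case class Mul(l: SignExpr, r: SignExpr) extends SignExpr
case class Div(l: SignExpr, r: SignExpr) extends SignExpr
case class Pow(base: SignExpr, k: Int) extends SignExpr

def cheapSign(e: SignExpr): Int = e match {
  case Const(s)  => s
  case Mul(l, r) => cheapSign(l) * cheapSign(r)
  case Div(l, r) => cheapSign(l) * cheapSign(r) // quotient sign = product of signs
  case Pow(b, k) =>
    val s = cheapSign(b)
    if (k % 2 == 0) { if (s == 0) 0 else 1 }    // even exponent: non-negative
    else s                                       // odd exponent keeps the base's sign
}
```

Notice there is no `Add` case: addition is exactly where this structural recursion breaks down and an approximation is needed.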
This issue is more that if we have `(x + y).signum`, there are some shortcuts we can take in certain cases. For instance, if `x.signum * y.signum != -1`, then either `x` and `y` are both non-negative or both non-positive, so we can determine the sign just from the signs of `x` and `y`. Another idea is that we have upper and lower (log) bounds on all expressions that can be computed much faster than approximations. If `x` and `y` are fairly far apart, we may be able to determine with just these rough bounds that `x` dominates `y` (or vice versa) so much that it would be impossible to change `x`'s (or `y`'s) sign just through addition/subtraction.
Does this help?
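A sketch of how those two shortcuts might combine, under assumed names: `fastSignumOfSum` and the `lowerLog2`/`upperLog2` parameters are illustrative, not Spire's actual bounds machinery:

```scala
// Hypothetical fast sign check for x + y. Returns Some(sign) when a
// shortcut applies, None when a full approximation/separation-bound
// check is required. The log2 arguments are assumed cheap bounds on
// log2(|x|) and log2(|y|).
def fastSignumOfSum(xSign: Int, ySign: Int,
                    xLowerLog2: Long, xUpperLog2: Long,
                    yLowerLog2: Long, yUpperLog2: Long): Option[Int] =
  if (xSign == 0) Some(ySign)
  else if (ySign == 0) Some(xSign)
  else if (xSign * ySign != -1) Some(xSign)     // same sign: no cancellation possible
  else if (xLowerLog2 > yUpperLog2) Some(xSign) // |x| strictly dominates |y|
  else if (yLowerLog2 > xUpperLog2) Some(ySign) // |y| strictly dominates |x|
  else None // signs oppose and magnitudes overlap: must approximate
```

Only the final `None` case would fall through to the expensive separation-bound machinery.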
As far as `Real` is concerned - it is a different type (computable reals), so these 2 wouldn't affect each other. But in any case, `fastSignum` is purely just an implementation detail of `Algebraic` and I wouldn't intend for it to be used outside of `Algebraic.scala`.
That helped me a bit, but a lot's still over my head. Thank you for that detailed reply.
I posted an initial follow-up question here so as to not take up too much real estate of this issue.
In many cases we can likely compute the sign much quicker than we have been. Specifically, in the case of addition/subtraction, we immediately fall back to computing an approximation. However, for an expression like `x + y`, we only need to approximate the value if `x.signum * y.signum == -1`. Adding a `fastSignum` method which just handles these shortcuts would let us test the sign cheaply before performing unnecessary work.