ForceBru opened 3 years ago
Related discussion: https://discourse.julialang.org/t/is-bigfloat-loss-of-precision-intended/61728
Addressing this properly involves reworking BigFloat -- it would be a [very welcome, widely applauded] breaking change. Or should I call it a bugfix?
Which precision should be used if both operands have different precisions?
Prefer that the operands have the same precision.
Where they do not, providing the result at the higher precision can misrepresent the information. It is best to copy the lower-precision value into a higher-precision variable, do the computation at the shared higher precision, and then convert the result to the lower precision and return that.
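In Julia that recipe might look like this (a minimal sketch; `op_at_shared_precision` is a hypothetical helper, not an existing API):

```julia
# Hypothetical helper: run `op` at the higher of the two operand
# precisions, then round the result back to the lower precision.
function op_at_shared_precision(op, a::BigFloat, b::BigFloat)
    lo, hi = minmax(precision(a), precision(b))
    r = setprecision(BigFloat, hi) do
        op(BigFloat(a, precision=hi), BigFloat(b, precision=hi))
    end
    return BigFloat(r, precision=lo)  # report the result at the lower precision
end

# e.g. op_at_shared_precision(+, x, y)
```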
In some cases, e.g. multiplying two numbers, using the higher precision throughout and returning the higher-precision result maximizes the accurate information conveyed. Another example where using the higher precision makes sense is iterative refinement: improving the accuracy of a result in steps, which is best done by increasing the precision at each step.
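A sketch of that iterative-refinement pattern, here with a Newton iteration for a square root (the starting precision and the doubling schedule are illustrative choices, not a fixed rule):

```julia
# Illustrative only: refine sqrt(a) by Newton's method, doubling the
# working precision each pass and reusing the previous iterate as the seed.
function refined_sqrt(a::BigFloat, target_prec::Integer)
    prec = 64
    x = setprecision(BigFloat, prec) do
        sqrt(BigFloat(a, precision=prec))
    end
    while prec < target_prec
        prec = min(2prec, target_prec)
        x = setprecision(BigFloat, prec) do
            x0 = BigFloat(x, precision=prec)
            (x0 + BigFloat(a, precision=prec) / x0) / 2  # Newton step for sqrt
        end
    end
    return x
end
```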
So, the best rule is to understand the computation and act accordingly.
I think that we should always be using the maximum precision. It's more analogous to what we do for `Float32 + Float64`, and is easier to reason about. If you want bounds on precision, you should be using interval methods, or a library like arb which explicitly tracks the error.
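For comparison, here is the Float32/Float64 analogy and what the proposed rule would mean for BigFloat (the BigFloat comments describe the proposal, not current behavior):

```julia
# Mixed Float32/Float64 arithmetic promotes to the wider type:
typeof(1.0f0 + 1.0)  # Float64

# Proposed analog for BigFloat:
x = BigFloat(1, precision=128)
y = BigFloat(1, precision=512)
# Under the proposed rule, precision(x + y) would be 512;
# current releases give the global default instead.
```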
I have no issues with that policy. It does make it simpler for clients to "keep track" of what is going on, and the caller may always force a reduction in precision -- although we should provide a more reliable mechanism for that.
Do you mean always use the value of `precision(BigFloat)`, or always use `maximum(precision.([args...]))`?
The second.
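Spelled out as (hypothetical) one-liners, the two readings are:

```julia
# Reading 1: ignore the operands, always use the global default precision.
result_prec_global(args::BigFloat...) = precision(BigFloat)

# Reading 2 (the one meant here): use the largest operand precision.
result_prec_operands(args::BigFloat...) = maximum(precision.(args))
```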
If there is going to be a global precision setting like we have now (and I'm not saying there should be), then it basically has to work like it does now. If precision is derived from operands, changing the global setting almost never has any effect. Say `x` is a BigFloat and I do `x / pi`. `pi` has to be converted to BigFloat first, and unfortunately the conversion can only see the type, so it has to use the global precision. If the global precision was what you wanted, we should keep using it instead of considering `x`. If `x`'s precision was what you wanted, it will get polluted if `pi`'s precision is bigger.

Worse still, if `x`'s precision is what you wanted and it's higher than the global precision, `pi` will still get converted at the global precision, polluting the least significant bits of `x` after the division.
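A sketch of that failure mode, assuming the behavior this issue was filed against (conversions and results take the global default precision):

```julia
setprecision(BigFloat, 256) do       # global default: 256 bits
    x = BigFloat(2, precision=1000)  # the caller wanted 1000 bits
    y = x / pi                       # pi is converted at 256 bits
    precision(y)                     # 256 with current behavior; if the
    # result instead kept x's 1000 bits (the proposed rule), everything
    # past bit 256 would be garbage inherited from the truncated pi
end
```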
I would expect operators to respect the precision specified in the `BigFloat` constructor, but they seem to only care about the global precision, even when both numbers are high-precision `BigFloat`s. I initially stumbled upon this when answering this question on Stack Overflow: https://stackoverflow.com/q/67919412/4354477
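A minimal sketch of the reported behavior (illustrative values; 256 bits is Julia's default BigFloat precision):

```julia
x = BigFloat("1.1", precision=1000)  # explicitly request 1000 bits
y = BigFloat("2.2", precision=1000)
precision(x)      # 1000
precision(x + y)  # 256 -- the global default, not the operands' precision
```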