
Operations on `BigFloat` don't seem to respect precision set in the constructor #41171

ForceBru commented 3 years ago

I would expect operators to respect the precision specified in the BigFloat constructor, but they seem to only care about the global precision, even when both numbers are high-precision BigFloats:

julia> setprecision(4)  # low default precision
4

julia> BigFloat(1) / 3
0.344  # OK, low precision

julia> BigFloat(1, precision=100) / 3  # specify HIGH precision!
0.344  # low precision still!

julia> BigFloat(1, precision=100) / BigFloat(3, precision=100)  # specify HIGH precision for BOTH operands
0.344  # low precision anyway

julia> versioninfo()
Julia Version 1.6.1
Commit 6aaedecc44 (2021-04-23 05:59 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin18.7.0)
  CPU: Intel(R) Core(TM) i5-3330S CPU @ 2.70GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-11.0.1 (ORCJIT, ivybridge)

I initially stumbled upon this when answering this question on Stack Overflow: https://stackoverflow.com/q/67919412/4354477

thofma commented 3 years ago

Related discussion: https://discourse.julialang.org/t/is-bigfloat-loss-of-precision-intended/61728

JeffreySarnoff commented 3 years ago

Addressing this properly involves reworking `BigFloat` -- it would be a [very welcome, widely applauded] breaking change. Or should I call it a bugfix?

thofma commented 3 years ago

Which precision should be used if the two operands have different precisions?

JeffreySarnoff commented 3 years ago

Prefer that the operands have the same precision.

Where they do not, returning the result at the higher precision can misrepresent the information. It is best to copy the lower-precision value into a higher-precision variable, do the computation with the shared higher-precision values, then convert the result to the lower precision and return that.
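
A minimal sketch of that rule in today's Julia (the helper name `op_at_shared_precision` is hypothetical, not part of Base):

function op_at_shared_precision(op, a::BigFloat, b::BigFloat)
    hi = max(precision(a), precision(b))
    lo = min(precision(a), precision(b))
    # Allocate the result at the shared higher precision; MPFR always
    # reads the operands at their full stored precision.
    r = setprecision(() -> op(a, b), BigFloat, hi)
    # Round the result back to the lower precision and return that.
    return BigFloat(r, precision=lo)
end

For example, `op_at_shared_precision(/, BigFloat(1, precision=100), BigFloat(3, precision=8))` computes the quotient at 100 bits and returns an 8-bit result.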

In some cases, e.g. multiplying two numbers, using the higher precision throughout and returning the higher-precision result maximizes the accurate information conveyed. Another case where using the higher precision makes sense is iterative refinement, where the accuracy of a result is improved by increasing the precision in steps.

So, the best rule is to understand the computation and act accordingly.

oscardssmith commented 3 years ago

I think that we should always use the maximum precision. It is more analogous to what we do for `Float32 + Float64`, and it is easier to reason about. If you want bounds on precision, you should be using interval methods, or a library like Arb which explicitly tracks the error.
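
For comparison, mixed machine-precision arithmetic already promotes to the wider type:

julia> typeof(Float32(1) + Float64(3))  # the narrower operand is widened
Float64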

JeffreySarnoff commented 3 years ago

I have no issues with that policy. It does make it simpler for clients to "keep track" of what is going on, and the caller may always force a reduction in precision -- although we should provide a more reliable mechanism for that.
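
Today that reduction can be forced through the constructor's precision keyword:

julia> x = BigFloat(1) / 3;  # carries the global precision (256 bits by default)

julia> precision(BigFloat(x, precision=53))  # explicitly round down to 53 bits
53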

Do you mean always use the value of `precision(BigFloat)`, or always use `maximum(precision.([args...]))`?

oscardssmith commented 3 years ago

The second.

JeffBezanson commented 3 years ago

If there is going to be a global precision setting like we have now (and I'm not saying there should be), then it basically has to work like it does now. If precision is derived from the operands, changing the global setting almost never has any effect. Say `x` is a `BigFloat` and I do `x / pi`. `pi` has to be converted to `BigFloat` first, and unfortunately the conversion can only see the type, so it has to use the global precision. If the global precision was what you wanted, we should keep using it instead of considering `x`. If `x`'s precision was what you wanted, it will get polluted if `pi`'s precision is bigger.
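
This is easy to see with the behavior reported above: the materialized `pi`, and hence the result, carries the global precision no matter what `x` has:

julia> setprecision(4)
4

julia> precision(big(pi))  # conversion sees only the type, so it uses the global setting
4

julia> precision(BigFloat(1, precision=100) / pi)  # the result does too
4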

LilithHafner commented 1 year ago

Worse still, if `x`'s precision is what you wanted and it is higher than the global precision, `pi` will still get converted at the global precision, polluting the least significant bits of the result of the division.
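
A rough sketch of the size of that pollution, comparing `pi` materialized at 8 bits against a 256-bit reference:

# pi rounded to 8 bits vs. 256 bits; the gap is what would leak into
# the low bits of a high-precision x / pi even under a max-operand rule.
lo  = setprecision(() -> big(pi), BigFloat, 8)
hi  = setprecision(() -> big(pi), BigFloat, 256)
err = setprecision(() -> abs(hi - lo), BigFloat, 256)  # ≈ 9.7e-4, about 2^-10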