Open JeffreySarnoff opened 4 years ago
There are certainly many improvements that are possible.
BigFloats are used only for output -- I guess that's what you are referring to. I'm not sure how to avoid them easily there. You are probably right that I need to be more careful with rounding there, and also to be careful with the precision.
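To make the output-rounding concern concrete, here is a sketch in Python of printing an exact real to a fixed number of decimal places with enough working precision that binary-to-decimal rounding cannot contaminate the result. The `approx(n)` interface is hypothetical (an integer `a` with `|x - a/2**n| <= 2**-n`), standing in for the package's internals:

```python
import math

def to_decimal(approx, digits):
    # approx(n) -> integer a with |x - a / 2**n| <= 2**-n
    # (hypothetical interface; nonnegative values only, for brevity).
    # Take enough binary precision that 2**-n is far below 10**-digits.
    n = math.ceil(digits * math.log2(10)) + 4
    a = approx(n)
    # the value is a / 2**n; rescale to units of 10**-digits, round to nearest
    scaled = a * 10**digits
    q, r = divmod(scaled, 2**n)
    if 2 * r >= 2**n:
        q += 1
    s = str(q).rjust(digits + 1, "0")
    return s[:-digits] + "." + s[-digits:]

# x = 1/3, approximated by round(2**n / 3)
third = lambda n: (2**n + 1) // 3
print(to_decimal(third, 6))   # 0.333333
```

The guard bits (`+ 4`) are what "being careful with the precision" amounts to here: the binary approximation error must sit strictly below the decimal resolution being printed.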
I called it `largest_n` since it keeps track of the largest value of `n` that has been calculated so far. What is the `i` in `ivalue`?
`x.msd` is a memoized version that is checked on the first line of the `msd` function.
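A minimal sketch of that memoization pattern (the names `largest_n`, `ivalue`, and the `approx` interface are stand-ins mirroring the discussion, not the package's actual code):

```python
class ExactReal:
    def __init__(self, approx):
        # approx(n) -> integer a with |x - a / 2**n| <= 2**-n
        self.approx = approx
        self.largest_n = 0          # finest precision computed so far
        self.ivalue = approx(0)     # cached approximation at largest_n
        self.msd_cache = None       # memoized most-significant-digit position

    def __call__(self, n):
        if n > self.largest_n:      # refine only when more precision is requested
            self.largest_n = n
            self.ivalue = self.approx(n)
        k = self.largest_n - n
        # coarser requests reuse the finer cached value, rounded down to n bits
        return self.ivalue if k == 0 else (self.ivalue + (1 << (k - 1))) >> k

    def msd(self):
        # memoized on the first line, as described above
        if self.msd_cache is not None:
            return self.msd_cache
        n = 1
        while abs(self(n)) <= 1:    # caution: never terminates when x == 0
            n += 1
        self.msd_cache = n
        return n
```

Note the caveat in `msd`: locating the most significant digit of an exact zero is itself a non-terminating search, which is the same phenomenon as the equality problem below.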
The `compare` function does not terminate given `(x, x)` -- reals cannot be checked for equality. But `≈` (`\approx`) should definitely be implemented.
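The semantics can be sketched like this (Python, with reals as hypothetical approximation functions `x(n) -> a` where `|x - a/2**n| <= 2**-n`): exact comparison is only a semi-decision, while approximate equality at a stated tolerance always terminates.

```python
def compare(x, y, max_n=64):
    # Semi-decision: +1 / -1 once the order is certain; None at the
    # precision cap -- on (x, x) it would otherwise loop forever.
    n = 1
    while n <= max_n:
        a, b = x(n), y(n)
        if a - b > 2:              # (a - b - 2) / 2**n > 0, so x > y for sure
            return 1
        if b - a > 2:
            return -1
        n *= 2
    return None

def approx_eq(x, y, n):
    # True  => |x - y| <= 2**-n is guaranteed
    # False => x != y           is guaranteed
    m = n + 2
    return abs(x(m) - y(m)) <= 2

third = lambda n: (2**n + 1) // 3     # 1/3
half  = lambda n: 2**n // 2           # 1/2
```

Here `compare(third, half)` settles to `-1` quickly, while `compare(third, third)` exhausts the cap and returns `None` -- exactly the non-termination being described, made explicit.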
There are apparently various ways of calculating `pi` using sums of infinite series. A nice reference is https://www.doc.ic.ac.uk/teaching/distinguished-projects/2011/t.spratt.pdf (an undergrad thesis!)
*i*nteger_value
Doesn't `largest_n` really keep track of the current "resolution" of the computation, which is always going to be the largest value of `n` so far?
This is a real number: 1. Why is it not possible to check that for equality with itself?
Yes, it is the current "resolution" and is the largest value of `n` seen to date (hence the name). I'm happy to change it to something better.
An exact real is given by a countable number of values `x(n)` or `x_n`, so it's impossible to know if there's one down the line that is different.
I don't know how you could encode whether a given `ExactReal` is rational or not. (I only just started reading this stuff recently and don't have a good feel for it at all.)
I know there have been radically distinct implementation efforts and theoretical approaches over the past 20 years. I do not know which is considered best in class nor why one may be favored. We should start with some shared understanding of what sort of numerical entity this is to be -- I will read the pdf you linked.
What are you planning to do with this, and why does that require more than 2,500 decimal digits for some of the values?
And why prefer this to very resolved rationals, where the precision is always available and a quick estimate of that precision is rapidly obtainable? Is it expected to be faster? Probably ... unless evaluation gets bogged down with maintaining the interval edges -- maybe a ball (point + inclusive radius) approach makes sense here. The way it is done in Arb is not all that well resolved (30-bit radii, albeit relative bits). We could do much better without extraordinary means. No idea of the time required to implement it that way, though.
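A toy version of the ball idea (exact `Fraction` midpoints and radii here for clarity; Arb itself uses an arbitrary-precision float midpoint with a low-precision, outward-rounded radius):

```python
from fractions import Fraction

class Ball:
    def __init__(self, mid, rad=0):
        self.mid, self.rad = Fraction(mid), Fraction(rad)

    def __add__(self, other):
        # the sum lies within the sum of the radii of the midpoint sum
        return Ball(self.mid + other.mid, self.rad + other.rad)

    def __mul__(self, other):
        # |x*y - mx*my| <= |mx|*ry + |my|*rx + rx*ry
        return Ball(self.mid * other.mid,
                    abs(self.mid) * other.rad
                    + abs(other.mid) * self.rad
                    + self.rad * other.rad)

    def contains(self, value):
        return abs(Fraction(value) - self.mid) <= self.rad

x = Ball(Fraction(1, 3), Fraction(1, 1000))
y = Ball(Fraction(1, 2), Fraction(1, 1000))
```

The maintenance cost being worried about is visible in `__mul__`: every operation carries a radius update, which is cheap per step but never goes away.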
That is interesting -- a very different approach.
I'm not necessarily planning to do anything particular with it -- I just wanted to explore the idea and implementation.
Big rationals don't suffice to represent `pi` etc.
Using Arb and recalculating with higher precision when necessary is probably a decent solution. I think that is similar to what iRRAM does?
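The recompute-at-higher-precision loop can be sketched with Python's `decimal` (an illustration of the idea only -- it checks that two successive precisions agree, which is a heuristic, whereas iRRAM tracks error bounds rigorously):

```python
from decimal import Decimal, getcontext

def stable_eval(f, digits):
    # rerun f at doubling working precision until two consecutive
    # runs agree to the requested number of digits
    prec, prev = digits + 5, None
    while True:
        getcontext().prec = prec
        cur = +f()                      # unary + rounds to the current context
        if prev is not None and abs(cur - prev) < Decimal(10) ** -digits:
            return cur
        prev, prec = cur, prec * 2

# a cancellation-prone expression: at low precision the 1/3 is lost entirely
val = stable_eval(lambda: (Decimal(1) / 3 + Decimal(10) ** 20) - Decimal(10) ** 20, 15)
```

The first pass returns 0 here; only after the precision doubles twice does the result stabilize at 1/3, which is the behavior the recalculation strategy relies on.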
Many of the few exact-real implementations use GMP|MPFR or similar libraries .. although they apply the libs differently or, more likely, divide the work in their own ways. I looked at the state of the art in the 1990s (iRRAM, Boehm's "Calculator") and my take-away then was "sheesh, this software takes forever and a day to work as advertised". If there is something with promise for Julia (or for Julia as academician) in this arena, it would be predicated on "hey, that works much faster than expected".
this is correct or willfix .. (spent a month on the substrative logic) .. no relevant tests exist yet
While unadvertised, `ArbNumerics` now abides programming with variable precision rather nicely. You do not have to manage the precision of all vars used in, say, an iteratively refining computation; it just works when one keeps the focal var (or few focal vars, if not all iterative-refinement steps utilize the same focal var) appropriately precisioned. Moreover, altering the precision of an active var is peachy-keen, too; it is not necessary, though it may be appropriate, to create a new var for each precision adjustment. (Again, untested.)
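An analogue of that variable-precision style, sketched in Python's `decimal` (the idea only -- ArbNumerics' actual mechanics are its own): one focal variable, its working precision raised as the iteration converges, with no need to rebuild the variable at each precision change.

```python
from decimal import Decimal, getcontext

def refine_sqrt2(digits):
    # Newton iteration for x**2 = 2; each step roughly doubles the
    # number of correct digits, so precision is doubled in lockstep
    x, prec = Decimal("1.5"), 4
    while prec < digits + 5:
        prec *= 2
        getcontext().prec = prec
        x = (x + Decimal(2) / x) / 2   # the focal var just keeps refining in place
    for _ in range(2):                 # polish at the final precision
        x = (x + Decimal(2) / x) / 2
    return +x
```

Raising precision in lockstep with convergence means early (cheap, low-precision) steps are not wasted on accuracy the iterate cannot yet support.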
You are using the quasi-fact that BigFloats know how to do arithmetic with directed rounding. It is necessary to assure that the effective precision of the BigFloat, when doing the rounding, is more than enough to accommodate however many sigbits there are in the struct's `BigInt` field.

Maybe `msd` is not getting enough sleep -- it is described as both "minimum significant digit" and "most significant digit". `n` and `ivalue` would be clearer than `largest_n`, `largest_value` as fieldnames.
You compute `msd(x)`, `msd(y)` in `*` when `x.msd` and `y.msd` are fields -- are the fields not kept accurately? And if not -- why have them?

Does the current `compare` function terminate given `(x, x)`?

How would it obtain `pi` to better than `n` digits/bits (cheating by using BigFloat is expected)?