Open jdemeyer opened 8 years ago
Last 10 new commits:
ab117da | complex_ball: abort() --> sig_error() |
1ee8c16 | ref manual: rm reference to arb being optional |
865e6d5 | {real,complex}_arb: more doc on precision issues |
d289538 | {real,complex}_arb: move SEEALSO blocks after EXAMPLES |
fe1b071 | RealBall: clarify doc of upper(), lower(), endpoints() |
a38574a | complex_arb is no longer experimental |
d5af324 | {real,complex}_arb: minor doc fixes |
f43e94a | RealBall: minor change to __hash__ |
7e6b5d0 | real_arb: minor doc fix |
638f7f5 | Always round balls to the precision of the parent |
IMO saying that “arb balls can have a precision different from their parent” isn't the right way to think about the issue. For example, `RealBallField(100)(1)` and `RealBallField(200)(1)` both have midpoints whose mantissa fits on 1 bit and zero radius: what is their “precision”? The way I view it, they have none by themselves, and having `RealBallField`s of different precisions is just a convenient way of specifying the precision at which the results of operations on balls should be rounded.
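For illustration, a minimal session along these lines (assuming the standard `is_exact()` method of `RealBall` and the coercion between ball fields of different precision; outputs indicative):
sage: from sage.rings.real_arb import RealBallField
sage: a = RealBallField(100)(1)
sage: b = RealBallField(200)(1)
sage: a.is_exact() and b.is_exact()  # both are the exact ball 1 with zero radius
True
sage: a == b                         # equal as balls, whatever their parents' precisions
True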
I don't find the way things currently work particularly confusing, while I think it can be useful in “advanced” use (to compute the result of a subtraction involving cancellation, say). I should also say that real balls won't really “have the precision of their parent” even after your change: at the very least it will still be possible to create balls whose midpoint's precision is less than the precision of the parent—and that's a good thing, but I'm not sure it is less confusing than the behavior you are trying to change.
So personally I'm certainly not going to give this ticket positive_review. That being said, I have no strong objection either to rounding integers etc. by default when creating new balls. It just makes things a bit less flexible for no good reason.
Replying to @mezzarobba:
IMO saying that “arb balls can have a precision different from their parent” isn't the right way to think about the issue. For example, `RealBallField(100)(1)` and `RealBallField(200)(1)` both have midpoints whose mantissa fits on 1 bit and zero radius: what is their “precision”?
By definition, the precision is the precision of the parent. There is nothing confusing about this. It is exactly like everything else in Sage.
I don't find the way things currently work particularly confusing, while I think it can be useful in “advanced” use (to compute the result of a subtraction involving cancellation, say).
We shouldn't aim in the first place at "advanced" use; we should aim in the first place for intuitive use. This means in particular being compatible with the rest of Sage.
I should also say that real balls won't really “have the precision of their parent” even after your change: at the very least it will still be possible to create balls whose midpoint's precision is less than the precision of the parent—and that's a good thing
That's an implementation detail. It doesn't matter from the user's point of view.
So personally I'm certainly not going to give this ticket positive_review. That being said, I have no strong objection either to rounding integers etc. by default when creating new balls. It just makes things a bit less flexible for no good reason.
There is a good reason, read the ticket description.
In case it's not clear: my main objection is really that you're doing things too differently from the other numerical rings in Sage.
Replying to @jdemeyer:
There is a good reason, read the ticket description.
Equality of balls is rare: it only holds if either the objects are identical (`is` is a subset of `==`) or if both balls are exact and equal. As `RBF(1.3)` is not exact, it should not be surprising that `a+0 == b+0` returns `False`.
A similar example --- and this is not related to the purpose of the ticket as far as I can understand it --- is
sage: a = RBF(1.3)
sage: a == a
True
sage: a + 1 == a + 1
False
Description changed:
---
+++
@@ -1,10 +1,12 @@
The fact that `arb` balls can have a precision different from their parent is really confusing. It's different from anything else in Sage and leads to strange things like
-sage: a = RealBallField(20)(1.3)
-sage: b = RealBallField(53)(1.3)
-sage: a == b
+sage: from sage.rings.real_arb import RealBallField
+sage: sage: from sage.rings.real_arb import RealBallField
+sage: sage: a = RealBallField(20)(1.3)
+sage: sage: b = RealBallField(53)(1.3)
+sage: a.identical(b)
True
-sage: a+0 == b+0
+sage: (a+0).identical(b+0)
False
Replying to @jdemeyer:
Equality of balls is rare, it only holds if either the objects are identical
Seriously? Even more crazy `arb` stuff:
sage: from sage.rings.real_arb import RBF
sage: a = RBF(1/3)
sage: b = RBF(1/3)
sage: a.identical(b)
True
sage: a == b
False
sage: a == a
True
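For context, a minimal illustrative session (assuming the standard `is_exact()` and `rad()` methods of `RealBall`): both balls are inexact enclosures of 1/3, which is why `==` between two distinct objects cannot be certified.
sage: from sage.rings.real_arb import RBF
sage: a = RBF(1/3)
sage: a.is_exact()   # 1/3 is not exactly representable, so the ball carries a radius
False
sage: a.rad() > 0
True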
Replying to @jdemeyer:
Replying to @jdemeyer:
Equality of balls is rare, it only holds if either the objects are identical
Seriously? Even more crazy `arb` stuff:
sage: from sage.rings.real_arb import RBF
sage: a = RBF(1/3)
sage: b = RBF(1/3)
sage: a.identical(b)
True
sage: a == b
False
sage: a == a
True
Which of the above do you consider crazy?
Replying to @cheuberg:
Replying to @jdemeyer:
Replying to @jdemeyer:
Equality of balls is rare, it only holds if either the objects are identical
Seriously? Even more crazy `arb` stuff:
sage: from sage.rings.real_arb import RBF
sage: a = RBF(1/3)
sage: b = RBF(1/3)
sage: a.identical(b)
True
sage: a == b
False
sage: a == a
True
Which of the above do you consider crazy?
sage: a == a
True
Replying to @jdemeyer:
Replying to @cheuberg:
Replying to @jdemeyer: Which of the above do you consider crazy?
sage: a == a
True
We discussed this behaviour (introduced for consistency with the aliasing of input arguments) in #17194, starting at #17194 comment:63.
Right. It is consistent with the "equal pointers means equal values" idea though, so at least there is some justification.
Comparison of floating point numbers is generally fraught with precision issues and best avoided, so if that's all then I'd rather stay as close to the libarb behavior as possible, whatever it is. Balls (like intervals and normal floats) are just a tool, none of them are "real numbers". But if a Sage arb computation doesn't map directly to a libarb computation then I'd be very confused.
Replying to @vbraun:
But if a Sage arb computation doesn't map directly to a libarb computation then I'd be very confused.
This ticket isn't really about whether or not a Sage computation maps to a libarb computation. It's more about how it maps to a libarb computation.
More precisely, it's about the meaning of precision: currently, `arb` handles precision in a way which is completely different from anything else in Sage (the precision of an element can be different from the precision of the parent). This has consequences such as `a` and `a + 0` not always being the same.
On this ticket, I propose to handle precision in `arb` the same way as for example `RR` or `RIF`. This means that the precision is determined by the parent: an element has the precision of its parent.
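For comparison, a minimal sketch of the `RR`-style behaviour referred to here (using the standard `prec()` method of `RealNumber`; outputs indicative):
sage: RealField(20)(1.3).prec()
20
sage: RealField(53)(1.3).prec()
53
sage: (RealField(20)(1.3) + 0).prec()  # arithmetic stays at the parent's precision
20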
Description changed:
---
+++
@@ -10,3 +10,5 @@
sage: (a+0).identical(b+0)
False
+
+I propose to handle precision in `arb` the same way as for example `RR` or `RIF`. This means that the precision is determined by the parent: an element has the precision of its parent.
The way I understand it, what this ticket really does is not to make “the precision of arb balls” that of their parent, but to round newly created balls to the precision of their parent by default. IMO this is a regression compared to the existing implementation (so I'd be in favor of closing the present ticket as wontfix), but a small one (so I'm not going to further argue against it, I just won't review it).
Incidentally, I don't see how “`arb` handles precision in a way which is completely different from anything else in Sage”. For example, you can have power series truncated at any order in a `PowerSeriesRing` with any `default_prec`.
Replying to @jdemeyer:
This ticket isn't really about whether or not a Sage computation maps to a libarb computation. It's more about how it maps to a libarb computation.
That's of course technically true, but you know what I meant:
If a Sage arb computation maps to a libarb computation with additional manual rounding thrown in then I'd be very confused.
Replying to @vbraun:
If a Sage arb computation maps to a libarb computation with additional manual rounding thrown in then I'd be very confused.
Well, both `acb_set_fmpz` and `acb_set_round_fmpz` are `arb` API functions, so it's a matter of choosing between them. It's not "manual rounding thrown in", it's just calling a different function.
But even then, I think that it doesn't really matter how `libarb` does stuff internally: that's an implementation detail. What I propose here is simply to handle precision in `arb` the same way we handle precision in `RR` and `RIF`.
Replying to @mezzarobba:
The way I understand it, what this ticket really does is not to make “the precision of arb balls” that of their parent, but to round newly created balls to the precision of their parent by default.
I really don't understand the difference between those 2 things. Yes, I propose to "round newly created balls to the precision of their parent by default". But that's just the implementation I need to achieve that the precision of arb balls is that of their parent.
Replying to @mezzarobba:
Incidentally, I don't see how “`arb` handles precision in a way which is completely different from anything else in Sage”.
OK, let me explain: in `arb`, parents have a precision. Also elements have a precision, which can be different from the precision of the parent. But, when doing any operation, the result gets rounded to the precision of the parent. There is nothing in Sage which works this way.
This leads to things like
sage: from sage.rings.real_arb import RBF
sage: a = RBF(3^100)
sage: a.identical(a+0)
False
I don't know any Sage ring element which changes when adding `0`.
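A minimal illustrative contrast with `RR` (assuming nothing beyond standard `RealNumber` arithmetic): there, an element is already rounded to the parent's 53 bits, so adding zero cannot change it.
sage: x = RR(3^100)
sage: x == x + 0
True
sage: x.prec()
53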
The difference with `PowerSeriesRing` is that the parent doesn't have a precision, only a default precision. A `PowerSeriesRing` can contain elements of any precision and that precision is taken into account when doing arithmetic:
sage: R.<x> = PowerSeriesRing(QQ)
sage: a = x + O(x^30); a
x + O(x^30)
sage: a + 0
x + O(x^30)
sage: a = x + O(x^10); a
x + O(x^10)
sage: a + 0
x + O(x^10)
The equivalent of what `arb` currently does would be
sage: R.<x> = PowerSeriesRing(QQ)
sage: a = x + O(x^30); a
x + O(x^30)
sage: a + 0
x + O(x^20)
Replying to @jdemeyer:
The difference with `PowerSeriesRing` is that the parent doesn't have a precision, only a default precision. A `PowerSeriesRing` can contain elements of any precision and that precision is taken into account when doing arithmetic:
Yes, I agree that the analogy is not perfect. But I believe the example of power series rings shows that there is no single way of handling precision that is used uniformly across all sage parents, and that what we do with ball fields is not completely different from anything else in sage.
The way I view it, balls (elements) do not need a precision, because their accuracy is determined by their diameter. Only operations between balls have a precision, provided by the parent. It is up to the implementation of each operation to decide how large the mantissa of the center of its result needs to be, and this size may or may not be the precision of the operation (as far as I know, arb doesn't guarantee anything regarding the relation between these two quantities).
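A short illustrative session along these lines (assuming the standard `rad()` and `accuracy()` methods of `RealBall`; the bound 95 is indicative, not an exact value):
sage: from sage.rings.real_arb import RealBallField
sage: x = RealBallField(100)(1/3)
sage: x.rad() > 0          # the radius, not a stored "precision", measures the accuracy
True
sage: x.accuracy() >= 95   # effective relative accuracy in bits, close to the working precision here
True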
Replying to @mezzarobba:
I believe the example of power series rings shows that there is no single way of handling precision that is used uniformly across all sage parents
Well, also keep in mind that the "precision" in power series (or p-adics, which are analogous) is of a more "algebraic" nature. You can see `O(x^n)` as modding out by the ideal `(x^n)`. There is no such thing for floating-point numbers, and it would make a lot of sense to handle precision in `RBF` the same way as other floating-point rings such as `RR` and `RIF`.
and that what we do with ball fields is not completely different from anything else in sage.
Maybe not completely different, but certainly significantly different.
That's also the reason why I got so confused with the `round()` function when reviewing #19152. I think that most people who use `RealBallField` will have the same confusion as me. I just see no reason why you insist on doing things differently for `arb`.
It is up to the implementation of each operation to decide how large the mantissa of the center of its result needs to be, and this size may or may not be the precision of the operation (as far as I know, arb doesn't guarantee anything regarding the relation between these two quantities).
You are right, the `arb` documentation is lacking: https://github.com/fredrik-johansson/arb/issues/65
Branch pushed to git repo; I updated commit sha1. New commits:
86783ae | Merge tag '6.10.rc0' into t/19568/arb_balls_should_have_the_precision_of_the_parent |
Changed dependencies from #19152 to none
Branch pushed to git repo; I updated commit sha1. New commits:
9a25f5e | Merge tag '7.1.beta2' into t/19568/arb_balls_should_have_the_precision_of_the_parent |
Merge conflict.
@cheuberg: do you actually care about this ticket? If you do, then I will try to fix it.
Replying to @jdemeyer:
@cheuberg: do you actually care about this ticket? If you do, then I will try to fix it.
I am not convinced by the idea; so the answer is probably no, sorry. The ticket just turned up on a patchbot which I am experimenting with.
CC vdelecroix because you are working on various arb tickets.
Description changed:
---
+++
@@ -2,9 +2,8 @@
sage: from sage.rings.real_arb import RealBallField -sage: sage: from sage.rings.real_arb import RealBallField -sage: sage: a = RealBallField(20)(1.3) -sage: sage: b = RealBallField(53)(1.3) +sage: a = RealBallField(20)(1.3) +sage: b = RealBallField(53)(1.3) sage: a.identical(b) True sage: (a+0).identical(b+0)
I am not sure that I would accept the proposal of this ticket in its current form. As already discussed above, the concepts of precision are very different in `mpfr` (`mpc`/`mpfi`) and `arb`. Namely, in `mpfr` this is a guaranteed precision of the result whereas in `arb` it is mostly a working precision. In `arb` the only way that you can achieve a tight result in the output is to make a loop with increasing precision. This is reflected in the fact that `mpfr_XXX` functions have no precision argument whereas all `arb_XXX` functions end with a `long prec`.
I have no magic solution, but:
- `mpfr` and `arb` are not the same thing

Replying to @videlec:
As already discussed above, the concepts of precision are very different in `mpfr` (`mpc`/`mpfi`) and `arb`.
I would say that this is an implementation detail of the library which should not affect how Sage deals with these objects.
Replying to @videlec:
when you are lucky and get more precision, why would you truncate it
With that argument, you might as well use larger precision for everything everywhere. I mean, we could change `RR` such that `RR(some_large_int)` stores the integer with full precision. We don't do that because everybody expects that elements of `RR` have only 53 bits of precision.
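An illustrative session for the `RR` behaviour invoked here (assuming the standard `prec()` and `exact_rational()` methods of `RealNumber`):
sage: x = RR(3^100)
sage: x.prec()                      # elements of RR always carry 53 bits
53
sage: x.exact_rational() == 3^100   # the integer was rounded on conversion
False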
I really don't see why arb balls should be different. And that is really my main concern for this ticket: perhaps the arb semantics make sense in a certain way. But the arb semantics are really surprising because they are different from anything else in Sage.
Replying to @jdemeyer:
Replying to @videlec:
when you are lucky and get more precision, why would you truncate it
With that argument, you might as well use larger precision for everything everywhere. I mean, we could change `RR` such that `RR(some_large_int)` stores the integer with full precision. We don't do that because everybody expects that elements of `RR` have only 53 bits of precision.
As I already said, `RBF(some_large_int)` should truncate the entry but currently does not (it uses `arb_set_fmpz`). I am a big +1 for such a change in our arb interface. There are two different questions:

1. `RBF(something)`: in this case I would like the truncation to always operate
2. `my_ball.some_function()`: where I do not know what is the best solution

I really don't see why arb balls should be different. And that is really my main concern for this ticket: perhaps the arb semantics make sense in a certain way. But the arb semantics are really surprising because they are different from anything else in Sage.
They are different in case 2. above. If you compute the logarithm of `my_ball`, you have no idea a priori of the precision of `log(my_ball)`. This is not the case in `mpfr`, where the precision is somehow statically encoded in the data structure. You can always extend/truncate the result, but this is rather artificial. That is exactly why I do not like it.
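A hedged illustration of this point (assuming the standard `log()` and `accuracy()` methods of `RealBall` and the conversion from `RIF`; the exact figures depend on the inputs, so they are left as comments):
sage: from sage.rings.real_arb import RBF
sage: RBF(2).log().accuracy()           # close to the 53-bit working precision
sage: RBF(RIF(1, 2)).log().accuracy()   # only a few bits: limited by the input radius, not by the parent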
FWIW, after now a few years of regularly working with RBF/CBF, I am more and more convinced that the current implementation is better than truncating every time. In particular, it allows you to compare balls belonging to different parents in a meaningful way, even though they get coerced to the parent with the lower precision.
Replying to @videlec:
As I already said, `RBF(some_large_int)` should truncate the entry but currently does not (it uses `arb_set_fmpz`). I am a big +1 for such a change in our arb interface.
That is exactly what this ticket is about... and nothing else. So I clearly missed something in your comment [comment:31]
The title says "arb balls should have the precision of the parent", which is more general than what I proposed. I changed the description accordingly.
Description changed:
---
+++
@@ -1,4 +1,6 @@
-The fact that `arb` balls can have a precision different from their parent is really confusing. It's different from anything else in Sage and leads to strange things like
+In general `arb` balls can have a precision different from their parent. Though, when converting to a parent, the precision should be set to the one of the parent. Especially when the input is an integer or a rational.
+
+Because of this, we get strage behavior such as
sage: from sage.rings.real_arb import RealBallField
@@ -10,4 +12,4 @@
False
-I propose to handle precision in `arb` the same way as for example `RR` or `RIF`. This means that the precision is determined by the parent: an element has the precision of its parent.
+This ticket proposes to make it so that when `R(x)` is called the result is a ball with the exact precision of `R`.
Description changed:
---
+++
@@ -1,4 +1,4 @@
-In general `arb` balls can have a precision different from their parent. Though, when converting to a parent, the precision should be set to the one of the parent. Especially when the input is an integer or a rational.
+In general `arb` balls can have a precision different from their parent. Though, when converting to a ball parent, the precision should be set to the precision of the parent. Especially when the input is an integer or a rational.
Because of this, we get strage behavior such as
Description changed:
---
+++
@@ -1,6 +1,6 @@
In general `arb` balls can have a precision different from their parent. Though, when converting to a ball parent, the precision should be set to the precision of the parent. Especially when the input is an integer or a rational.
-Because of this, we get strage behavior such as
+Because of this, we get strange behavior such as
sage: from sage.rings.real_arb import RealBallField
Replying to @videlec:
Namely, in `mpfr` this is a guaranteed precision of the result whereas in `arb` it is mostly a working precision. In `arb` the only way that you can achieve a tight result in the output is to make a loop with increasing precision. This is reflected in the fact that `mpfr_XXX` functions have no precision argument whereas all `arb_XXX` functions end with a `long prec`.
Aren't you confusing two things here?
When I say "precision" I mean the number of bits stored in the data structure, regardless of how these bits were obtained. This is a priori unrelated to the error margin (the difference between the mathematical result and the output number).
MPFR makes guarantees about both: you know how many bits the output number has and you know that the number is exactly rounded.
With arb, you still know the number of bits which is used to represent the midpoint of the output ball. That is, the `prec` argument of the function which was called. This is clearly documented in http://arblib.org/using.html:
Given a ball [m±r] with m∈R (not necessarily a floating-point number), we can always round m to a nearby floating-point number that has at most prec bits in the component u, and add an upper bound for the rounding error to r. In Arb, ball functions that take a prec argument as input (e.g. arb_add()) always round their output to prec bits. Some functions are always exact (e.g. arb_neg()), and thus do not take a prec argument.
So I think that "arb balls should have the precision of the parent" is equivalent to "`RBF(x)` should have the precision of `RBF`", unless I am misunderstanding something.
In general `arb` balls can have a precision different from their parent. Though, when converting to a ball parent, the precision should be set to the precision of the parent. Especially when the input is an integer or a rational.

Because of this, we get strange behavior such as

sage: from sage.rings.real_arb import RealBallField
sage: a = RealBallField(20)(1.3)
sage: b = RealBallField(53)(1.3)
sage: a.identical(b)
True
sage: (a+0).identical(b+0)
False

This ticket proposes to make it so that when `R(x)` is called the result is a ball with the exact precision of `R`.

CC: @videlec @mezzarobba @cheuberg
Component: interfaces
Author: Jeroen Demeyer
Branch/Commit: u/jdemeyer/arb_balls_should_have_the_precision_of_the_parent @ 9a25f5e
Issue created by migration from https://trac.sagemath.org/ticket/19568