Open videlec opened 9 years ago
Changed keywords from none to sd66
In `exactify` there is a bit of code which adjusts the recursion limit, increasing it by 10 for every recursive call so that the limit is never reached. It was introduced in 42b0fb3 to address #2638. I guess we could use the same approach for `_interval_fast` as well.
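For reference, here is a minimal sketch of the kind of pattern being described; the node layout and method names (`is_leaf`, `left`, `right`, `combine`, `exact_value`) are made up for illustration and are not the actual code in qqbar.py.

```python
import sys

# Hypothetical illustration only, not the code in qqbar.py: raise the
# recursion limit by a small amount around every recursive call so that
# Python's own limit is never the thing that fails, then restore it.
def exactify_like(node):
    old_limit = sys.getrecursionlimit()
    sys.setrecursionlimit(old_limit + 10)
    try:
        if node.is_leaf():                      # made-up node API
            return node.exact_value()
        return node.combine(exactify_like(node.left()),
                            exactify_like(node.right()))
    finally:
        sys.setrecursionlimit(old_limit)
```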
On the other hand, we could also try addressing the source of this deep recursion. The way I see it, that's because addition is left associative, so that cyclotomic polynomial will be a very deep but thin binary tree. If we had a representation which describes a sum (and perhaps also a product) of an arbitrary number of algebraic numbers using a single descriptor, the data structure would become much more shallow.
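To make the shape problem concrete, here is a tiny plain-Python sketch with hypothetical node classes (not QQbar's actual descriptors): repeated binary `+` produces a tree whose depth equals the number of terms, while a single n-ary node keeps the depth at one.

```python
# Hypothetical node classes, just to illustrate the two tree shapes.
class BinaryAdd:
    def __init__(self, left, right):
        self.left, self.right = left, right

class NarySum:
    def __init__(self, terms):
        self.terms = list(terms)    # one node with arbitrarily many children

terms = list(range(1009))

# Left-associative addition nests each new term around the previous partial
# sum, so the tree depth equals the number of terms (~1009 here) and any
# recursive walk over it needs about that many stack frames.
deep = terms[0]
for t in terms[1:]:
    deep = BinaryAdd(deep, t)

# One n-ary descriptor: depth 1, and a plain loop over .terms evaluates it.
flat = NarySum(terms)
```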
As a third solution, we might set up our own evaluation machinery for these trees, with our own stack instead of Python recursion. I haven't yet worked out all the details, but if this sounds interesting I might write some code to see how this approach feels.
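Something along these lines, reusing the hypothetical `BinaryAdd` nodes from the sketch above: a post-order walk driven by an explicit Python list, so the depth of the tree costs heap memory rather than interpreter stack frames.

```python
def evaluate_iterative(root):
    # Explicit work stack of (node, children_already_processed) pairs.
    stack = [(root, False)]
    values = {}                              # id(node) -> computed value
    while stack:
        node, children_done = stack.pop()
        if not isinstance(node, BinaryAdd):  # leaf: already a plain number
            values[id(node)] = node
        elif children_done:
            values[id(node)] = values[id(node.left)] + values[id(node.right)]
        else:
            stack.append((node, True))       # revisit once the children are done
            stack.append((node.right, False))
            stack.append((node.left, False))
    return values[id(root)]

# evaluate_iterative(deep) == sum(range(1009)), with no Python recursion at all.
```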
The way I see it, since the backtrace is about `_interval_fast` and `_more_precision`, all of this is happening before exact computation is triggered, right? Do we have any way to find out that exact computation might in this case be faster than repeated numeric refinement? I fear we have no way to detect this, but if someone has an idea, please share it.
Replying to @gagern:
On the other hand, we could also try addressing the source of this deep recursion. The way I see it, that's because addition is left associative, so that cyclotomic polynomial will be a very deep but thin binary tree. If we had a representation which describes a sum (and perhaps also a product) of an arbitrary number of algebraic numbers using a single descriptor, the data structure would become much more shallow.
Looks like a good idea to have a polynomial descriptor for one (or several?) algebraic numbers. It might even be used to get faster and more precise interval evaluations.
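As a loose sketch of how such a descriptor could help the interval side (Sage code; the helper name and interface are made up, only `ComplexIntervalField`, `interval_fast` and polynomial coefficient lists are existing pieces): storing p(a) as the pair (p, a) would allow computing an enclosure with a single Horner loop at the requested precision, without any recursion.

```python
def interval_of_poly_at(p, a, prec=64):
    # Hypothetical helper: enclose p(a) for an algebraic number a by Horner's
    # rule in interval arithmetic, instead of walking a deep binary +/* tree.
    field = ComplexIntervalField(prec)
    x = a.interval_fast(field)              # interval enclosure of a
    result = field(0)
    for c in reversed(p.list()):            # coefficients from the leading one down
        result = result * x + field(c)
    return result

# For instance, interval_of_poly_at(cyclotomic_polynomial(1009), QQbar.zeta(1009))
# should return a small complex interval containing 0.
```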
As a third solution, we might set up our own evaluation machinery for these trees, with our own stack instead of Python recursion. I haven't yet worked out all the details, but if this sounds interesting I might write some code to see how this approach feels.
Looks reasonable to do it without recursion. We might obtain a good speed-up.
The way I see it, since the backtrace is about `_interval_fast` and `_more_precision`, all of this is happening before exact computation is triggered, right?
Right!
Do you find it reasonable to open two tickets, one for each of these two ideas?
Vincent
No, I'd keep it in one ticket, although it certainly makes sense to have multiple branches. But I see these things as complementary: if either one works well, the other might become obsolete. And to decide that, we need to compare them, which is easier when we have a single ticket. Once one thing is ready to be merged, then we can move the remaining idea to a new ticket.
Description changed:
---
+++
@@ -1,3 +1,4 @@
+In QQbar, there are currently many operations that involve recursive calls. For example, this rather simple example gives an error because the Python stack gets filled:
sage: a = QQbar.zeta(1009)
@@ -11,4 +12,8 @@
RuntimeError: maximum recursion depth exceeded
+In this ticket:
+- we make sum and product be n-ary operators instead of binary
+- we remove all recursive call to become depth first search
+
Such behavior prevents avoiding lazy fields (RLF and CLF) for number field embeddings (see e.g. #18103).
Description changed:
---
+++
@@ -13,7 +13,8 @@
In this ticket:
+- introduce a new (one variable) polynomial descriptor
-Such behavior prevents avoiding lazy fields (RLF and CLF) for number field embeddings (see e.g. #18103).
+The current behavior prevents avoiding lazy fields (RLF and CLF) for number field embeddings (see e.g. #18103).
Changed keywords from sd66 to sd66, qqbar
FWIW, in Sage 9.1.beta3:
sage: a = QQbar.zeta(1009)
sage: p = cyclotomic_polynomial(1009)
sage: b = p(a)
sage: b
0
sage: b == 0
True
I arrived on this ticket because of #28599.
Oh my, increasing the recursion limit is a horrible hack. There's a reason why Python doesn't like to do deep recursion: the fixed-size C call stack means that C itself doesn't handle deep recursion well, and CPython implements Python recursion by recursing in C. So along with the Python frame stack, the C call stack also gets deeper.
Before you start changing the recursion limit in Python, you really want to make sure you can't accomplish the same thing in another way. In particular, you should make sure the recursion is really needed and that stack depth only grows logarithmically with problem size (even if that takes it a little beyond the default of 1000 that is normally set). The patch from #2638 seems to still be in force. If it is at all possible to rewrite that code so that recursion limit increases are not necessary, we'd have a significant improvement in our code base.
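A small illustration of the danger (plain Python; the call is left commented out on purpose): on CPython versions where each Python-level call also consumes a C stack frame, raising the limit far enough simply trades a clean Python exception for a segmentation fault once the C stack is exhausted.

```python
import sys

def depth(n):
    # Trivially deep recursion, one Python (and C) frame per level.
    return 1 if n == 0 else 1 + depth(n - 1)

sys.setrecursionlimit(10**6)
# depth(500000)   # with a default-sized C stack this typically kills the
#                 # interpreter with a segfault instead of raising an exception
```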
In QQbar, there are currently many operations that involve recursive calls. For example, this rather simple example gives an error because the Python stack gets filled:
In this ticket:
- we make sum and product n-ary operators instead of binary ones
- we remove all recursive calls in favor of an iterative depth-first search
- we introduce a new (one variable) polynomial descriptor
The current behavior prevents avoiding lazy fields (RLF and CLF) for number field embeddings (see e.g. #18103).
CC: @gagern
Component: number fields
Keywords: sd66, qqbar
Issue created by migration from https://trac.sagemath.org/ticket/18106