upiterbarg / mpmath

Automatically exported from code.google.com/p/mpmath

Allow using Python float & complex instead of mpf, mpc #32

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
mpf and mpc are a lot slower than Python floats and complexes. Sometimes I'd like to take advantage of all the nice algorithms in mpmath (like the special functions and ODE solvers), but I'd like them to execute fast (using the Python float & complex classes), and I don't mind some rounding errors.

Imho something like

mpf = float
mpc = complex

should be enough, but it needs to be hooked up in mpmath somehow.
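
For a rough sense of the gap, a quick micro-benchmark (illustrative only; exact numbers depend on the machine and mpmath version):

import timeit

# Time a small arithmetic expression with plain floats vs. mpf.
float_time = timeit.timeit("x*x + 1.5", setup="x = 0.7", number=100000)
mpf_time = timeit.timeit("x*x + c",
                         setup="from mpmath import mpf; x = mpf(0.7); c = mpf(1.5)",
                         number=100000)
print("float: %.3fs  mpf: %.3fs  ratio: %.1fx"
      % (float_time, mpf_time, mpf_time / float_time))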

Original issue reported on code.google.com by ondrej.c...@gmail.com on 24 Mar 2008 at 12:14

GoogleCodeExporter commented 9 years ago
Generally it will be better to use scipy for this. Few of the algorithms in mpmath have an advantage unless they are working at increased precision.

Original comment by fredrik....@gmail.com on 24 Mar 2008 at 12:20

GoogleCodeExporter commented 9 years ago
It would be nice though. We're using gmpy for high precision, so why not use Python float/complex for low precision?

Original comment by Vinzent.Steinberg@gmail.com on 19 Aug 2008 at 7:45

GoogleCodeExporter commented 9 years ago
In many cases, you can. For example, you can pass a pure float function to quadts. If the integrand is very expensive to evaluate and you are happy with low precision, this is certainly worthwhile. But the quadrature weights are still mpfs (just to take an example).
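
For instance, a sketch along these lines (assuming the interval-list form of quadts; the quadrature nodes arrive as mpf, so the integrand converts them to float):

import math
from mpmath import quadts

# Pure-float integrand: cheap to evaluate, float rounding accepted.
def f(x):
    return math.exp(-float(x)**2)

print(quadts(f, [0, 1]))   # ~0.746824..., returned as an mpf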

For the operations done by mpmath itself, I don't see optionally using floats as being worth the trouble (there are maybe some exceptions). I also like the fact that mpmath results are platform independent, even at low precision.

Original comment by fredrik....@gmail.com on 19 Aug 2008 at 8:25

GoogleCodeExporter commented 9 years ago
http://fredrik-j.blogspot.com/2009/01/jacobi-theta-function-fractals.html

"Arbitrary-precision arithmetic is really unnecessary for this (except at high zoom); the Jacobi theta functions can be implemented quite easily with ordinary floating-point arithmetic, and would be orders of magnitude faster as such."

I think this is a very good argument to support ordinary floats in mpmath. :)

It would be nice to have a force_type parameter for any mpmath function, like for the solvers in optimization.py. It should be trivial to implement. (Set force_type=mpmathify by default and replace mpmathify with force_type.)
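
A hypothetical sketch of the pattern (some_special_function is an illustrative name, not real mpmath API; mpmathify is the conversion currently hard-coded):

from mpmath import mpmathify

# force_type defaults to mpmathify; passing force_type=float keeps
# all subsequent arithmetic in plain Python floats.
def some_special_function(x, force_type=mpmathify):
    x = force_type(x)
    return x*x + 1    # stands in for the actual algorithm

print(some_special_function(0.5))                    # mpf result
print(some_special_function(0.5, force_type=float))  # plain float result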

Original comment by Vinzent.Steinberg@gmail.com on 15 Jan 2009 at 7:32

GoogleCodeExporter commented 9 years ago
In fact, I'm now working on a substantial rewrite of mpmath that will be able to seamlessly support Python numbers, gmpy numbers, Sage numbers, intervals, rationals, or any other numerical type. A function is implemented using a high-level interface that hides the details of the number type, and optionally speed-critical parts (e.g. series summations) can be compiled to specialized code at runtime. The speedup for Jacobi theta functions in particular when going from mpc to complex is about 20x.
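
In released mpmath this design surfaced as parallel contexts; a sketch of the usage (assuming fp provides jtheta, as the generic implementations should):

from mpmath import mp, fp

# Same function on two contexts: mp uses mpf/mpc, fp uses float/complex.
z, q = 0.25 + 0.25j, 0.5
print(mp.jtheta(3, z, q))   # mpc result, arbitrary precision
print(fp.jtheta(3, z, q))   # complex result, much faster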

Original comment by fredrik....@gmail.com on 30 Mar 2009 at 2:27

GoogleCodeExporter commented 9 years ago
Very nice! I'm looking forward to it. Let me know in case you need help.

Original comment by Vinzent.Steinberg@gmail.com on 30 Mar 2009 at 6:38

GoogleCodeExporter commented 9 years ago
Yes, that's the way to go.

Original comment by ondrej.c...@gmail.com on 30 Mar 2009 at 7:25

GoogleCodeExporter commented 9 years ago
I'm currently toying with taking a piece of high-level Python code and compiling it so that precision management and similar cruft is performed inline. For example, it is straightforward to create fixed-point code this way. The attached file, which requires Python 2.6, computes exp(x) as a test. In the future the compiler could do more advanced things like "while abs(x) > eps" <==> "while x_mag > -prec".
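
Roughly the style of code such a compiler would emit: a hand-written sketch of fixed-point exp (not the actual fixify.py output):

# exp(x) by Taylor series in fixed point: every value is an integer
# scaled by 2**prec, so the whole loop is pure integer arithmetic.
def exp_fixed(x, prec):
    one = 1 << prec
    term = one      # k-th series term, fixed-point
    total = 0
    k = 1
    while term:     # stop when the term underflows to 0
        total += term
        term = (term * x >> prec) // k   # term *= x/k
        k += 1
    return total

prec = 53
x = int(0.5 * 2**prec)                 # 0.5 in fixed point (assumes 0 <= x < 1)
print(exp_fixed(x, prec) / 2.0**prec)  # ~1.6487212707, i.e. exp(0.5)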

Assuming nice enough code, type inference turns out to be relatively straightforward (certainly for this kind of code it is sufficient to support a static, procedural subset of Python).

This might be overkill, but it's fun to see how far it can be taken. What I'm trying to accomplish is a way to implement arithmetic-heavy code without the slowdown of wrapper classes and without the syntactical inconvenience of wrapper functions (like add(x,y)). Ultimately, of course, this has to be complemented with class methods and/or generic functions to handle more advanced operations.

Original comment by fredrik....@gmail.com on 31 Mar 2009 at 2:32

Attachments: fixify.py

GoogleCodeExporter commented 9 years ago
Nice idea, but I suggest using Cython in pure Python mode for exactly this kind of thing. That should be much faster than fixify.py.

Otherwise it's the same idea: take pure Python code and compile it. You compile it to Python, while Cython compiles it to very fast C, if you help it with some inference directives.

I plan to use the pure Python mode to speed up SymPy.
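
For reference, a minimal pure-Python-mode sketch (needs the cython package installed for the import, but runs unchanged under plain CPython):

import cython

# The decorator is a no-op under CPython; when compiled with Cython,
# it types the locals so the loop becomes plain C double arithmetic.
@cython.locals(x=cython.double, s=cython.double, term=cython.double, k=cython.int)
def exp_series(x):
    s = 1.0
    term = 1.0
    for k in range(1, 25):
        term *= x / k
        s += term
    return s

print(exp_series(0.5))   # ~1.6487212707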

Original comment by ondrej.c...@gmail.com on 31 Mar 2009 at 5:42

GoogleCodeExporter commented 9 years ago
Though some functions still need to be added and tests need to be written, this is now working.
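
For example, the usage this enables (mp and fp being the arbitrary- and machine-precision contexts in current mpmath):

from mpmath import mp, fp

# Identical calls on both contexts; fp works in plain float/complex.
print(mp.quad(lambda x: mp.exp(-x**2), [0, 1]))  # mpf result
print(fp.quad(lambda x: fp.exp(-x**2), [0, 1]))  # float result, faster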

Original comment by fredrik....@gmail.com on 15 Jan 2010 at 8:24