upiterbarg / mpmath

Automatically exported from code.google.com/p/mpmath

Big changes: number representation, etc #138

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
I'm planning several extensive changes, motivated by a combination of
feature requests, realization of design mistakes, changed focus (today I'm
less interested in emulation of certain aspects of IEEE 754 arithmetic, for
example), changes in available tools (such as the availability of GMPY),
and the need to serve better as a library for SymPy and Sage (I'm by the
way being funded this summer to work on special functions for Sage via mpmath).

Here are my present plans. I'm of course very much interested in comments
and suggestions.

The mpf and mpc types will be removed. Instead there will be a single type
for representing both real and complex floats. In Python-mode, the value
will be represented by a tuple (real, imag) where a Python 0 in place of
either part represents a zero. This will eliminate much of the present
type-checking cruft (and also allow for a correct way to work with strictly
imaginary numbers without adding any additional complexity).
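
As a rough illustration of that layout (the values and helper below are purely hypothetical, not the final API):

# Hypothetical sketch of the merged value layout: each part is either a
# Python int 0, marking an exact zero part, or an mpf-style payload such
# as a (man, exp) pair.
real_value    = ((3, -1), 0)        # 1.5 + 0j  -- strictly real
imag_value    = (0, (1, 0))         # 0 + 1j    -- strictly imaginary
complex_value = ((3, -1), (1, 0))   # 1.5 + 1j

def is_strictly_real(value):
    # A Python 0 in the imaginary slot identifies a strictly real number.
    return value[1] == 0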

There will also be a type for representing rationals (both real and complex).

There will be a separate type for representing infinities and NaN.

The mpf value representation will be changed to just (man, exp) where man
can be signed; this should improve performance by reducing the amount of
data being shuffled around, given the new bit_length method in Python 2.7
(and 3.1) that will make recalculation of bc much less of a bottleneck in
Python mode. Avoiding redundant checks for inf/nan will also improve
performance.
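
For illustration only, a normalization helper built on int.bit_length (the function name and the rounding details here are made up) might look like:

def normalize(man, exp, prec):
    # Illustrative only: trim a signed (man, exp) pair to prec bits,
    # recomputing the bit count on the fly via int.bit_length()
    # instead of carrying a cached bc field around.
    if not man:
        return 0, 0
    bc = abs(man).bit_length()
    if bc > prec:
        shift = bc - prec
        man >>= shift        # plain truncation; real code would round properly
        exp += shift
    # Strip trailing zero bits so equal values share one representation.
    trailing = (abs(man) & -abs(man)).bit_length() - 1
    if trailing:
        man >>= trailing
        exp += trailing
    return man, exp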

The mpf_ and mpc_ layers will be slimmed down and used much less in favor
of more high-level code. This leads to a roughly 20-50% slowdown in
pure-Python mode in some places. However, this slowdown will be offset by
doing some things in C. The new floating-point type will easily be able to
optionally inherit from a C-defined base class (probably via GMPY, on which Mario
has already done work) that defines addition, multiplication, and
other speed-critical methods.

SymPy should be able to implement a fast type for real and complex numbers
by inheriting directly from the floating-point type and overriding
appropriate fallback methods.

Special functions will use a combination of 1) high-level code, written so
as to support any number type (provided that a compatibility interface is
implemented -- in particular, such an interface will be implemented for
Python floats and complexes), and 2) low-level code for evaluating
generalized power series and the like.

Series evaluation will be implemented using a simple subset of Python that
can be compiled to Python. The compiler can just return generic code
(suitable for use with e.g. float/complex), or automatically create
fixed-point code (including replacing operations on a single complex
variable by fixed-point operations on two real variables). 

I'm now fairly confident about this approach: with the current version of
the compiler (on my hard drive) I am essentially able to generate all the
200 lines of code in _jacobi_theta_2 from a 50-line specification, and in
addition the same specification can generate fast code that evaluates
jtheta(2,z,q) for float/complex z,q.

The series compiler should also be able to insert code that checks for
cancellation and slow convergence (this is very important for
hypergeometric functions), without the need to write a lot of boilerplate
manually for each series. An intriguing possibility for the future is to go
a step further and compile series code directly to a sequence of GMP/MPIR
mpz_ operations.

Some particularly speed-critical special functions (e.g. exp, log, atan,
cos, sin) will of course retain implementations in entirely hand-written
low-level code for performance reasons.

Finally, I want to get rid of global state (mp.dps) to make mpmath easier
to use as a library and to avoid potential parallelization issues. On the
other hand, I still don't want numbers to store a prec attribute. A
possibility is to give floating-point numbers an attribute pointing to a
Context object that in turn defines precision. Then there can still be a
default global Context, but it also becomes easy to create local contexts.
Another possibility, which really is quite similar in practice, is to use
multiple classes or instances of the number type itself for varying
precisions (like Sage's variable-precision rings).

Original issue reported on code.google.com by fredrik....@gmail.com on 1 Apr 2009 at 1:21

GoogleCodeExporter commented 9 years ago
This sounds like a massive amount of work.

How much of mpmath will be broken (or have to be rewritten) due to the new number representation?

(At least matrices.py, linalg.py and optimization.py should not need much porting.)

Original comment by Vinzent.Steinberg@gmail.com on 1 Apr 2009 at 7:09

GoogleCodeExporter commented 9 years ago
> How much of mpmath will be broken (or have to be rewritten) due to the new number representation?
>
> (At least matrices.py, linalg.py and optimization.py should not need much porting.)

There is a fair amount of low-level code that needs to be rewritten. However, in most cases it is possible to just wrap the existing code and replace/modify it piecewise later. For example, just editing the mpfunc and funcwrapper decorators should take care of most of functions.py.

The high-level code uses a relatively small interface (for example quadrature.py only imports 14 objects), so it shouldn't be affected much at all.

Original comment by fredrik....@gmail.com on 1 Apr 2009 at 7:31

GoogleCodeExporter commented 9 years ago
One option is to make prec an attribute of the floating-point class, as it used to be many versions ago -- one of the reasons for changing that was that it got messy with the separate mpc class (but merging the types avoids this). Then one can subclass in order to create numbers with local precision.

From my tests, merging classes gives speedups on the order of 1.5-2x for complex arithmetic and a negligible change in performance for real arithmetic (perhaps a 10% slowdown).

Another thing I want to do is to eliminate interval arithmetic and implement a type for (complex) significance arithmetic instead (à la SymPy's evalf). Done properly, this can be just as rigorous as interval arithmetic; it's much faster, and the implementation is simpler. A neater approach is then to implement interval arithmetic on top of the significance arithmetic. Then it would also be possible to work with intervals containing mixed approximate and exact (e.g. SymPy constants) endpoints, which would be interesting.

Original comment by fredrik....@gmail.com on 2 Apr 2009 at 6:36

GoogleCodeExporter commented 9 years ago
That'd be cool, all looks good. Looking forward.

The only thing that slightly concerns me is the custom compiler thing. Imho if the same thing can be achieved with Cython, then it'd be better. But if it can't, then it can't.

Original comment by ondrej.c...@gmail.com on 2 Apr 2009 at 6:55

GoogleCodeExporter commented 9 years ago
> The only thing that slightly concerns me is the custom compiler thing.

It might be sufficient to use it for "offline" code generation. We'll see.

Anyway, I'm going to hold off on making any changes to the number types
in mpmath. To begin with, let's instead just introduce a context class:

class MultiPrecisionArithmetic(ArithmeticContext):
    ...
    def cos(ctx, x):
        ...

Then we change all (high-level) mpmath functions to methods of the
multiprecision context, create a global default instance and
global aliases:

    mpmath.mp = MultiPrecisionArithmetic()
    mpmath.mpmathify = mp.convert
    mpmath.mpf = mp.mpf
    mpmath.mpc = mp.mpc
    mpmath.cos = mp.cos
    mpmath.sin = mp.sin
    ...

This way there won't even be any compatibility problems with external
code (e.g. the syntax mp.dps remains).

After this is done, it should be straightforward to implement e.g.
FastMachinePrecisionArithmetic and start populating the main methods.
Soon any high-level code written for the class MultiPrecisionArithmetic
should trivially work on top of FastMachinePrecisionArithmetic too.

Then it should be equally easy to start playing with alternative number
types without causing breakage by just wrapping them with suitable
ArithmeticContext classes.
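
To make the idea concrete (the function below is a made-up example, not part of the patch), high-level code written against a context argument becomes backend-agnostic:

def cos_plus_sin(ctx, x):
    # Made-up example of a high-level function: it only talks to the
    # context, so the same source runs on any ArithmeticContext subclass.
    return ctx.cos(x) + ctx.sin(x)

# Attached as a method, it becomes available on every context instance:
#     MultiPrecisionArithmetic.cos_plus_sin = cos_plus_sin
#     FastMachinePrecisionArithmetic.cos_plus_sin = cos_plus_sin
#     mp.cos_plus_sin(1)    # multiprecision result
#     fp.cos_plus_sin(1)    # machine-precision result (hypothetical fp instance)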

Original comment by fredrik....@gmail.com on 5 Apr 2009 at 7:39

GoogleCodeExporter commented 9 years ago
Just a heads up: I've been too busy the last few weeks to do any coding. But I'm
working on this now.

Original comment by fredrik....@gmail.com on 24 Apr 2009 at 12:05

GoogleCodeExporter commented 9 years ago
Yep, the plan looks good to me.

Original comment by ondrej.c...@gmail.com on 24 Apr 2009 at 6:54

GoogleCodeExporter commented 9 years ago
Here is a preliminary patch that applies the changes to mptypes.py and functions.py. Updating the rest of mpmath to use methods the same way should be quite straightforward.

It's now possible to do this:

>>> mp2 = mp.clone()
>>> mp2.dps = 50
>>> mp.exp(3.7)
mpf('40.447304360067399')
>>> mp2.exp(3.7)
mpf('40.447304360067397713778762769967774558238372250079618')

Also, most of the high-level functions in functions.py should now be relatively simple to adapt to support other number systems (e.g. float/complex).

Unfortunately, there are some precision-related doctest failures in functions.py

For example this happens when running functions.py:

Failed example:
    print beta(pi, e)
Expected:
    0.037890298781212201348153837138927165984170287886464
Got:
    0.037890298781212201348153837138927165984170287886466

However, I get the expected result when run standalone, or even when run in the same session after the test:

>>> from mpmath import *
>>> mp.dps = 50
>>> print beta(pi, e)
0.037890298781212201348153837138927165984170287886464

So something fishy is going on and I can't see what.

Original comment by fredrik....@gmail.com on 24 Apr 2009 at 7:25

Attachments:

GoogleCodeExporter commented 9 years ago
I figured it out (that was a most painful debugging session). Turns out it's an old bug in gamma that only accidentally wasn't tested before. Trivially fixed. Turning things inside out sometimes reveals interesting things :-)

Any objections to that patch? It's a bit rough, but the tests at least pass. I'll do further cleanup as a next step.

Vinzent, would you be willing to edit your code (matrices, solvers)? Otherwise I can do it, of course.

Original comment by fredrik....@gmail.com on 24 Apr 2009 at 9:44

GoogleCodeExporter commented 9 years ago
I can't promise to do it soon, as my final exams are beginning this Monday. The last one is on May 19, but there are huge gaps between the exams, and I probably don't want to spend all of that time studying. :)

Are there many things to be fixed? I guess I have to look at your patch...

Original comment by Vinzent.Steinberg@gmail.com on 25 Apr 2009 at 6:38

GoogleCodeExporter commented 9 years ago
Just go through the code line by line and change functions -> methods.

A little work is needed for the matrix class; the class should be created at run time and assigned a .context attribute, in similar fashion to mpf (see mptypes.py and particularly MultiPrecisionArithmetic.__init__()).

E.g.:

class matrix:
    def __add__(self, other):
        other = mpmathify(other)
        ...

def lu_solve(a, b):
    a = matrix(a)
    b = matrix(b)
    ...

becomes:

class MultiPrecisionArithmetic:
    def __init__(self):
        ...
        self.matrix = type('matrix', (_matrix,), {})
        self.matrix.context = self

class _matrix:
    def __add__(self, other):
        other = self.context.convert(other)

def lu_solve(ctx, a, b):
    a = ctx.matrix(a)
    b = ctx.matrix(b)
    ...

MultiPrecisionArithmetic.lu_solve = lu_solve
lu_solve = MultiPrecisionArithmetic.default_instance.lu_solve   # mp.lu_solve

Some other minor tweaks are needed in places. For example lu_solve uses the @extraprec(10) decorator, which won't work right now because extraprec is an instance method of the mp object; it would have to be replaced with a different decorator.
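
One possible shape for that replacement (just a sketch; the actual fix may look different) is a decorator that looks the context up at call time instead of binding the global mp instance:

def ctx_extraprec(n):
    # Sketch only: run a context method with n extra bits of working
    # precision, restoring the original precision afterwards.
    def decorator(f):
        def wrapper(ctx, *args, **kwargs):
            ctx.prec += n
            try:
                result = f(ctx, *args, **kwargs)
            finally:
                ctx.prec -= n
            return +result   # round back to the caller's precision
        return wrapper
    return decorator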

But for the most part, it should be straightforward. See functions.py; I converted about 1000 lines of code without any major difficulty in 2-3 hours.

Original comment by fredrik....@gmail.com on 25 Apr 2009 at 7:27

GoogleCodeExporter commented 9 years ago
A lot of extra versatility will be possible for certain things with only a little work:

* Instead of 'raise ValueError("failed to converge to target accuracy")', we can do something like mp.failed_to_converge(result) and make it a setting whether to raise/return/warn+return (a small sketch follows after this list).

* Similar for debugging/verbose printing. There can be an mp.verbose attribute and mp.log(msg), mp.warning(msg) methods, etc.

* We could make it possible to set custom defaults for certain routines (e.g. setting a default solver)

* A simple function can be added for saving/loading the mp object to a file. This could be used to store settings and cache data between interactive sessions.
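
A minimal sketch of the convergence-handling idea from the first bullet (the function, exception and settings attribute names are illustrative, not a settled API):

import warnings

class ConvergenceError(ValueError):
    pass

def failed_to_converge(ctx, result, msg="failed to converge to target accuracy"):
    # Sketch: the behaviour is chosen by a context setting instead of a
    # hard-coded raise inside every routine.
    action = getattr(ctx, "on_convergence_failure", "raise")
    if action == "raise":
        raise ConvergenceError(msg)
    if action == "warn":
        warnings.warn(msg)
    return result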

Original comment by fredrik....@gmail.com on 30 Apr 2009 at 8:53

GoogleCodeExporter commented 9 years ago
These are the kinds of speedups I'm getting with a mixed real/complex floating-point type, using the simplified representation. <R>, <I>, <C> denote a real, pure imaginary, and complex value, and the fraction is the speedup (larger is better) compared to mpf and mpc.

bool(<R>)        0.92
bool(<I>)        1.23
bool(<C>)        1.20
(<R>).real       0.79
(<I>).real       1.06
(<C>).real       1.04
(<R>).imag       0.91
(<I>).imag       1.09
(<C>).imag       1.10
<R> < 0          2.99
<R> >= 0         2.94
<R> < <R>        1.33
<R> == <R>       1.61
<R> == <I>       2.50
<R> == <C>       2.26
<C> == <C>       24.98
<I> == <R>       6.63
<C> == <R>       9.83
int(<R>)         1.11
float(<R>)       0.92
complex(<R>)     0.99
complex(<I>)     1.03
complex(<C>)     1.03
abs(<R>)         1.37
abs(<I>)         1.95
abs(<C>)         0.92
-<R>             1.29
-<I>             1.89
-<C>             2.23
<R> + <R>        0.87
<R> + <I>        2.60
<R> + <C>        1.31
<C> + <C>        0.96
<I> + <R>        2.79
<C> + <R>        1.53
<R> + 1          0.82
<I> + 1          1.96
<C> + 1          1.34
<R> + 0.5        1.29
<I> + 0.5        2.73
<C> + 0.5        1.97
<R> + 0.5j       2.25
<I> + 0.5j       3.77
<C> + 0.5j       3.82
<R> + (1+0.5j)   1.57
<I> + (1+0.5j)   3.61
<C> + (1+0.5j)   3.21
<R> * <R>        1.03
<R> * <I>        1.80
<R> * <C>        1.59
<C> * <C>        1.68
<I> * <R>        8.60
<C> * <R>        6.91
<R> * 3          0.96
<I> * 3          1.62
<C> * 3          1.16
<R> * 0.5        1.34
<I> * 0.5        5.43
<C> * 0.5        4.93
<R> * 0.5j       1.89
<I> * 0.5j       6.20
<C> * 0.5j       5.75
<R> * (1+0.5j)   1.63
<I> * (1+0.5j)   5.48
<C> * (1+0.5j)   4.44

It's a pretty big win for mixed-type operations. Some of the real-only operations are slightly slower; on the other hand, because formats and type conversions are simpler, it should be much more straightforward to provide full C(ython) versions of, for example, __add__ and __mul__.

This is with gmpy's from_man_exp; pure Python timings might be a bit worse. Some minor speedup was also gained via tweaks (such as avoiding rounding in __neg__) that could be implemented for mpf/mpc too.

On a related note, I think 10-50% speedups are possible for several of the elementary functions (just via improvements to the pure Python code -- using Mario's C code for the loops should then provide additional speedup). I'm working on this too.

Original comment by fredrik....@gmail.com on 7 May 2009 at 11:01

GoogleCodeExporter commented 9 years ago
The speedups are nice, but this one is a serious regression:

<R> + <R>        0.87

This is one of the most important floating-point operations. But hopefully it can be fixed. Any clue why it became slower?

Is a purely imaginary type really useful?

Original comment by Vinzent.Steinberg@gmail.com on 7 May 2009 at 3:20

GoogleCodeExporter commented 9 years ago
Not sure why it's so much slower. Inlining the addition code improves the figure to 0.95, but that's not necessarily worth the trouble. Crucially, though, fsum shouldn't get any slower (it might even be possible to make it a bit faster).

There's no separate imaginary type, just an optimization for complex numbers with a pure imaginary value. This can be useful in some situations. A trivial example is constructing a full complex number using an expression like x+j*y; it doesn't hurt if this gets 2-3 times faster.

Original comment by fredrik....@gmail.com on 7 May 2009 at 4:25

GoogleCodeExporter commented 9 years ago
> class matrix:
>     def __add__(self, other):
>         other = mpmathify(other)
>         ...
>
> def lu_solve(a, b):
>     a = matrix(a)
>     b = matrix(b)
>     ...
> 
> becomes:
>
> class MultiPrecisionArithmetic:
>     def __init__(self):
>         ...
>         self.matrix = type('matrix', (_matrix,), {})
>         self.matrix.context = self
>
> class _matrix:
>     def __add__(self, other):
>        other = self.context.convert(other)

How to use "matrix" in "_matrix"?

>
> def lu_solve(ctx, a, b):
>     a = ctx.matrix(a)
>     b = ctx.matrix(b)
>     ...

What about methods that do not depend on the context? (swap_row(), extend(), ...)

Original comment by Vinzent.Steinberg@gmail.com on 6 Jun 2009 at 10:06

GoogleCodeExporter commented 9 years ago
> How to use "matrix" in "_matrix"?

Maybe type(self) or self.context.matrix.

> What about methods that do not depend on the context? (swap_row(), extend(), ...)

They do seem to depend on the context in that they check for the matrix class.

Original comment by fredrik....@gmail.com on 6 Jun 2009 at 10:41

GoogleCodeExporter commented 9 years ago
Concerning code structure, it should be a bit cleaner to define everything as mixin classes and then use inheritance:

import functions
import matrices
import optimization
import quadrature
...

class MultiPrecisionArithmetic(functions.FunctionMethods,
                               matrices.MatrixMethods,
                               optimization.OptimizationMethods,
                               quadrature.QuadratureMethods):
    ...

By turning everything into a method, there won't be any dependency problems with doing it "backwards" like this.
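
For example (sketch only; the method name and body are placeholders), a mixin method simply takes the context as its first argument:

class QuadratureMethods(object):
    # Sketch: a mixin method takes the context (ctx == self) as its first
    # argument and reaches types and caches through it, so the mixin module
    # never needs to import mptypes directly.
    def quad_midpoint_demo(ctx, f, a, b):
        a, b = ctx.convert(a), ctx.convert(b)
        return f((a + b) / 2) * (b - a)   # placeholder body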

Original comment by fredrik....@gmail.com on 6 Jun 2009 at 10:48

GoogleCodeExporter commented 9 years ago
> They do seem to depend on the context in that they check for the matrix class.

Isn't the matrix class context-independent? It does not care about precision/arithmetic etc. This might change if specialized code gets implemented for speed.

Original comment by Vinzent.Steinberg@gmail.com on 6 Jun 2009 at 11:47

GoogleCodeExporter commented 9 years ago
The element types it uses are context-dependent.

The context-dependent matrix() constructor can control whether to convert elements to a specific type, for example.

Original comment by fredrik....@gmail.com on 6 Jun 2009 at 12:29

GoogleCodeExporter commented 9 years ago
Should low-level functions (LU_decomp() etc.) also be methods of the arithmetic?

Original comment by Vinzent.Steinberg@gmail.com on 6 Jun 2009 at 12:31

GoogleCodeExporter commented 9 years ago
Do I need to implement a decorator like defun()?

Here is my work-in-progress.

Original comment by Vinzent.Steinberg@gmail.com on 6 Jun 2009 at 12:45

Attachments:

GoogleCodeExporter commented 9 years ago
Generally, you should define a decorator if it gets rid of repeated code.

Original comment by fredrik....@gmail.com on 6 Jun 2009 at 2:45

GoogleCodeExporter commented 9 years ago
I'm having this dependency problem right now:

>>> from mpmath import *
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "mpmath/__init__.py", line 3, in <module>
    from mptypes import (
  File "mpmath/mptypes.py", line 1695, in <module>
    mp = MultiPrecisionArithmetic()
  File "mpmath/mptypes.py", line 103, in __init__
    ctx.matrix = type('matrix', (_matrix,), {})
NameError: global name '_matrix' is not defined

The defun() decorator does not work if the mp instance is not present:

>>> from mpmath import *
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "mpmath/__init__.py", line 3, in <module>
    from mptypes import (
  File "mpmath/mptypes.py", line 1696, in <module>
    mp = MultiPrecisionArithmetic()
  File "mpmath/mptypes.py", line 103, in __init__
    from matrices import _matrix
  File "mpmath/matrices.py", line 614, in <module>
    @defun
  File "mpmath/mptypes.py", line 1682, in defun
    return getattr(MultiPrecisionArithmetic.default_instance, f.__name__)
AttributeError: type object 'MultiPrecisionArithmetic' has no attribute
'default_instance'

Don't know where to put "from matrices import _matrix"...
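
For reference, a defun decorator along the lines implied by the traceback (reconstructed from the error message, not copied from the patch; MultiPrecisionArithmetic is the class from mptypes.py) would look roughly like this, which shows why it needs default_instance to exist first:

def defun(f):
    # Attach f as a method of the context class, then return the bound
    # method of the default instance so that module-level names keep
    # working.  Importing matrices.py therefore fails if this runs before
    # default_instance has been created.
    setattr(MultiPrecisionArithmetic, f.__name__, f)
    return getattr(MultiPrecisionArithmetic.default_instance, f.__name__)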

Original comment by Vinzent.Steinberg@gmail.com on 6 Jun 2009 at 3:57

Attachments:

GoogleCodeExporter commented 9 years ago
Hmm, not sure how to trivially work around that.

Do you want this patch in before the next release?

Original comment by fredrik....@gmail.com on 6 Jun 2009 at 4:45

GoogleCodeExporter commented 9 years ago
No, it's not that important, and it might be better to release before, because it's quite possible that some issues will arise. Is there anything important left to do before a release?

Will it be possible to use FixedPrecisionArithmetic() without changing much of the code?

We should benchmark whether the added overhead (attribute lookup) significantly hits performance. I think this could be nearly eliminated with temporary variables avoiding the lookup inside loops.

Original comment by Vinzent.Steinberg@gmail.com on 6 Jun 2009 at 6:37

GoogleCodeExporter commented 9 years ago
> Is there anything important left to do before a release?

Not really; just going over the doctests and documentation.

> Will it be possible to use FixedPrecisionArithmetic() without changing much of the code?

Yes, it should be.

> We should benchmark whether the added overhead (attribute lookup) significantly hits
> performance. I think this could be nearly eliminated with temporary variables
> avoiding the lookup inside loops.

The impact is small, and indeed one can use temporaries.
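
For example (a small sketch, with a made-up helper name), an inner loop can hoist the lookups into locals:

def sum_demo(ctx, terms):
    # Bind the context attribute once, outside the loop, so the
    # per-iteration cost is just a local-variable load.
    mpf = ctx.mpf
    s = mpf(0)
    for t in terms:
        s += mpf(t)
    return s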

Original comment by fredrik....@gmail.com on 6 Jun 2009 at 6:50

GoogleCodeExporter commented 9 years ago
I converted quadrature.py to methods, which again was straightforward. This time I did it as mentioned above, by putting everything in a QuadratureMethods class. (Neatly avoids any issues of circular imports.) The QuadratureRule classes themselves should perhaps be context-local... but that doesn't seem important to implement yet.

A small change is that the quadrature rule instances are not cached globally, but locally with the context, which again is cleaner I think.

How about adding .save(file) and .load(file) methods to Context? It could probably just use pickle. This could be used to save very high-precision quadrature nodes and other expensive cached objects between sessions.
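
A pickle-based sketch of what that could look like (the function names and the choice of what to persist are assumptions):

import pickle

def save_context(ctx, f):
    # Sketch only: persist the context's cached data (quadrature nodes,
    # constant caches, ...) with pickle.  A real version would have to skip
    # anything unpicklable, e.g. dynamically created classes.
    pickle.dump(ctx.__dict__, f)

def load_context(ctx, f):
    ctx.__dict__.update(pickle.load(f))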

Original comment by fredrik....@gmail.com on 23 Jun 2009 at 1:16

GoogleCodeExporter commented 9 years ago
> How about adding .save(file) and .load(file) methods to Context?

Why not? I'd implement it more generally (.save(stream)).

So I'll try to adapt your approach to matrices.py and linalg.py.

Original comment by Vinzent.Steinberg@gmail.com on 23 Jun 2009 at 1:29

GoogleCodeExporter commented 9 years ago
On a related note, I think we could move back to organizing the code into submodules (I'm thinking /calculus/, /functions/, /matrices/, and /lib/ for all the low-level modules).

The reason this was a PITA was mainly due to the need to cross-import, and the lack of relative imports in Python 2.4.

Original comment by fredrik....@gmail.com on 23 Jun 2009 at 2:44

GoogleCodeExporter commented 9 years ago
s/submodules/subdirectories

Original comment by fredrik....@gmail.com on 23 Jun 2009 at 2:44

GoogleCodeExporter commented 9 years ago
> I'd implement it more generally (.save(stream)).

Sure. On Windows, it'd be most convenient to also allow a file name string to be passed.

Perhaps if called with no arguments, it could save/load to a default mpmath_saved_context file in some appropriate location (I think Python's standard library provides functions to do this in a safe and cross-platform way?)

> So I'll try to adapt your approach to matrices.py and linalg.py.

Great!

Original comment by fredrik....@gmail.com on 23 Jun 2009 at 2:45

GoogleCodeExporter commented 9 years ago
> I think we could move back to organizing the code into submodules

You'll have to convince Ondrej to drop support for Python 2.4.

Original comment by Vinzent.Steinberg@gmail.com on 23 Jun 2009 at 3:52

GoogleCodeExporter commented 9 years ago
A bit of an update. I'm reverting my position on moving infinities to a separate class; it turns out not to simplify the implementation, and doesn't save much in the way of performance. Instead I'm trying the representation (real_man, real_exp, imag_man, imag_exp, special_flag), which can represent infinities and other values cleanly.

Some preliminary benchmark results for arithmetic can be seen in the attached file. I'm only benchmarking pure Python mode so far; gmpy mode is not fair because the current helper functions in gmpy are optimized for the existing format. This is just benchmarking the low-level functions, so when wrapped by a type, both speedups and slowdowns will be slightly reduced.

Summary: real addition performance is unchanged (slightly faster, in fact). Real multiplication is slower, sometimes by over 25%. However, one selling point of this format is that it becomes much easier to do exact real multiplications inline in other code, and this is one of the reasons why complex multiplication and squaring got so much (1.5x+) faster. (Dot products are another example that should be fast.)

Also, much of the slowdown that happened in real multiplication is due to the slowness of bitcount() in pure Python, even though I actually inlined it here (using gmpy's numdigits instead gives a 10% or so improvement), so the situation should improve with Python 2.7 where we get this builtin. I expect a uniform speedup in gmpy mode with its mpmath helper functions refitted to this format.

Since the primary effect of the format change is substantial code simplification, it seems worth it.

Original comment by fredrik....@gmail.com on 9 Aug 2009 at 4:26

Attachments:

GoogleCodeExporter commented 9 years ago
The new backend now runs most of functions.py. This allows for some macro-benchmarks. Trying out a couple of special function evaluations:

expr  -  evals/second (current mpmath)  -  evals/second (new backend)  -  speedup

lambertw(3.7)           -  1145 -  1228  (7% faster)
lambertw(3.7+4.3j)      -  350  -  515   (47% faster)
coulombf(3,2,3.7)       -  770  -  1330  (73% faster)
coulombf(3,2,3.7+4.3j)  -  613  -  914   (49% faster)
coulombf(3,2,37000)     -  256  -  291   (14% faster)
polylog(3,3.7)          -  465  -  555   (19% faster)
polylog(3,3.7+4.3j)     -  284  -  429   (50% faster)

The speedup in coulombf depends to a large extent on an improved core hypergeometric series evaluation routine, while the other functions depend on general number performance. Both lambertw and polylog of a complex argument suggest a ~50% speedup for practical complex arithmetic, overheads accounted for.

Also, it's interesting to compare fsum and fdot performance. Both functions can be implemented much better with the new representation, especially for complex arguments.

A = [mp.cos(n) for n in range(1000)]
B = [mp.sin(n) for n in range(1000)]
C = [a+b*j for (a,b) in zip(A,B)]
D = [b-a*j for (a,b) in zip(A,B)]

fsum(A)       -  475  -  569  (20% faster)
fsum(C)       -  194  -  408  (110% faster)
fdot(A, B)    -  189  -  273  (44% faster)
fdot(A, C)    -  78   -  196  (151% faster)
fdot(C, D)    -  30   -  124  (313% faster)

This should translate into a nice improvement for numerical integration and linear algebra.

With the caveat that some corner cases still need to be fixed and will add some more overhead.

There's a bit more work to do. Anything involving an infinity currently raises NotImplementedError, so that still needs to be implemented (it won't affect speed). Most low-level transcendental functions are just wrappers of mpmath at this point. The conversion overhead back and forth accounts for some slowdown. Fully reimplementing exp/log/cos/sin and complex exponentiation should provide a bit more speedup.

All timings above are for the pure-Python mode. I'll compare with gmpy when
http://code.google.com/p/gmpy/issues/detail?id=33 is implemented.

I'll put up a public svn branch soon.

Original comment by fredrik....@gmail.com on 5 Sep 2009 at 7:50

GoogleCodeExporter commented 9 years ago
By the way, the code also contains an experimental fixed-precision context using float/complex. It's missing a lot of base functions, but for example the Lambert W function works, and the performance numbers are lambertw(3.7) -- 38444 and lambertw(3.7+4.3j) -- 19608 (so a nice 30x speedup for this particular function). At this point the fixed-precision context is mostly helpful because it helps to identify abstraction leakage, but it shouldn't be too hard to make it fully functional.

I haven't given much thought to interval arithmetic. Perhaps implement interval arithmetic in a separate context? (It would simplify matters to support only a single number type for the standard multiprecision context.)
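
Roughly, such a fixed-precision context might look like this (a sketch only, not the code in the branch; the method set and names are illustrative):

import math, cmath

class FixedPrecisionArithmetic(object):
    # Sketch: back the context with machine floats/complexes, so high-level
    # code written against a ctx runs unchanged at fixed (double) precision.
    def convert(ctx, x):
        return complex(x) if isinstance(x, complex) else float(x)

    def cos(ctx, x):
        return cmath.cos(x) if isinstance(x, complex) else math.cos(x)

    def sin(ctx, x):
        return cmath.sin(x) if isinstance(x, complex) else math.sin(x)

fp = FixedPrecisionArithmetic()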

Original comment by fredrik....@gmail.com on 5 Sep 2009 at 8:00

GoogleCodeExporter commented 9 years ago
All doctests for calculus.py now pass (except for 2-3 tests requiring matrix).

I should be able to finish this up this week.

Original comment by fredrik....@gmail.com on 7 Sep 2009 at 10:20