JensGrabner / mpmath-2

Automatically exported from code.google.com/p/mpmath

linear algebra #46

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
If you are going to implement some stuff for solving linear equations (as you mentioned in your recent blog post), I could provide working (yet somewhat messy) code for the basic stuff like LU decomposition (this includes solving linear systems and calculating the inverse/determinant efficiently). Additionally, I could share code for solving overdetermined (and ordinary) linear systems via QR decomposition (LU decomposition is about twice as fast, but less accurate).

Original issue reported on code.google.com by Vinzent.Steinberg@gmail.com on 2 Jul 2008 at 7:58
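
To make the plan concrete, here is a minimal sketch of what such LU code typically looks like (an illustration in the spirit of the discussion, not the attached code): an in-place decomposition with partial pivoting, after which solving A*x = b is just a forward and a backward substitution.

    def lu_decompose(A):
        """In-place LU decomposition of the nested-list square matrix A.

        Returns the row permutation; L (unit lower) and U share A's storage.
        """
        n = len(A)
        perm = list(range(n))
        for k in range(n):
            # partial pivoting: move the largest |entry| of column k up
            p = max(range(k, n), key=lambda i: abs(A[i][k]))
            A[k], A[p] = A[p], A[k]
            perm[k], perm[p] = perm[p], perm[k]
            for i in range(k + 1, n):
                A[i][k] /= A[k][k]              # multiplier, stored in L's slot
                for j in range(k + 1, n):
                    A[i][j] -= A[i][k] * A[k][j]
        return perm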

GoogleCodeExporter commented 9 years ago
Yes, I am interested. How does it differ from the algorithms we have in SymPy?

Original comment by ondrej.c...@gmail.com on 2 Jul 2008 at 8:45

GoogleCodeExporter commented 9 years ago
All contributions are welcome.

Here is the code Felix sent me.

Original comment by fredrik....@gmail.com on 2 Jul 2008 at 8:56

Attachments:

GoogleCodeExporter commented 9 years ago
> Yes, I am interested. How does it differ from the algorithms we have in SymPy?

Purely numerical algorithms can be optimized.

See the attachment for an example. The customized, pure-Python mpf matrix multiplication is about 1.5x faster than numpy matrices of mpf instances for 3x3 matrices, and 5x faster for 15x15 matrices. The customized version also computes each element exactly, without rounding error from intermediate operations.

Original comment by fredrik....@gmail.com on 2 Jul 2008 at 9:59

Attachments:
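
The exactness point is worth spelling out. Present-day mpmath exposes this idea as fsum/fdot (treat these names as an assumption about today's API, not a description of the attached file): the whole sum is accumulated and rounded only once at the end.

    from mpmath import mp, mpf, fsum, fdot

    mp.dps = 15
    xs = [mpf(10)**16, mpf(1), -mpf(10)**16]
    print(fsum(xs))                   # 1.0: rounded once, at the very end
    print((xs[0] + xs[1]) + xs[2])    # 0.0: the intermediate sum already rounded

    ys = [mpf(3), mpf(5), mpf(7)]
    print(fdot(xs, ys))               # dot product, also rounded only once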

GoogleCodeExporter commented 9 years ago
I see, yes, nice idea. 

How should we integrate this with SymPy matrices and also with numpy matrices, so that one can convert one to the other easily? Or, to put it differently, if you construct a SymPy matrix with symbols and then substitute numbers for those symbols, it'd be nice if SymPy automatically chose the fastest algorithm for the given matrix.

Original comment by ondrej.c...@gmail.com on 2 Jul 2008 at 10:43

GoogleCodeExporter commented 9 years ago
Powerful. Fast. Simple. Pick two :)

Original comment by fredrik....@gmail.com on 2 Jul 2008 at 11:12

GoogleCodeExporter commented 9 years ago
> Yes, I am interested. How does it differ from the algorithms we have in SymPy?

The difference is that my algorithms are intended for numerical work, but they should also work with sympy's Rational, for example (to do exact algebra). I wrote them for a course in numerical analysis (which is why I used nested lists instead of numpy's array, and why it's called LR decomposition instead of LU). The functions beginning with num are exercises that were to be solved and provide some examples. There is some code that compares the results with numpy.

Several differences from sympy, if I recall correctly:

* LU decomposition uses pivoting with implicit scaling, which means better accuracy when using floats (a pivot-selection sketch follows this comment)
* the decompositions overwrite the matrix and thus use less memory (this only matters for big systems, for which you should use numpy anyway)
* afaik sympy's QR decomposition is not able to solve overdetermined linear systems
* matrices are implemented via nested lists, so it should work with any object that supports standard numerical operations (some tweaks will probably be necessary to make it work with non-floats)

Theoretically it should work out of the box with mpf, but I haven't tried yet.

Original comment by Vinzent.Steinberg@gmail.com on 3 Jul 2008 at 10:30

Attachments:
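
The first bullet above (pivoting with implicit scaling) amounts to choosing the pivot by its size relative to its own row. A sketch of the selection step, assuming nested lists (a hypothetical helper, not the attached implementation):

    def select_pivot(A, k):
        """Row index >= k to pivot on at elimination step k."""
        best_row, best_score = k, -1
        for i in range(k, len(A)):
            scale = max(abs(x) for x in A[i])     # implicit row scaling factor
            score = abs(A[i][k]) / scale if scale else 0
            if score > best_score:
                best_row, best_score = i, score
        return best_row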

GoogleCodeExporter commented 9 years ago
If you want it for mpmath, I'll refactor the code to use your matrix.py, add unit tests and polish it some more. Is the matrix interface final?

Later I'll add Cholesky decomposition (for solving symmetric systems efficiently) and iterative improvement of solutions. I could implement basic iterative solving of linear systems too.

Original comment by Vinzent.Steinberg@gmail.com on 19 Aug 2008 at 7:35

GoogleCodeExporter commented 9 years ago
That'd certainly be wonderful. As far as the interface goes, I don't think there's much to it. There should probably just be one all-real and one all-complex matrix type (with automatic conversion from real to complex when any entry becomes complex). Besides that, all that's needed is standard methods/functions.

From the implementation point of view, it might make sense to do only sparse matrices (with a dict instead of nested lists), since this doesn't cost much extra for dense matrices. If it doesn't complicate things too much.

Original comment by fredrik....@gmail.com on 19 Aug 2008 at 8:34
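
The dict-backed idea is easy to sketch: only nonzero entries are stored, and absent keys read as zero, so dense and sparse code share one path (a toy illustration, not the class discussed below):

    class SparseMatrix:
        def __init__(self, rows, cols):
            self.rows, self.cols = rows, cols
            self.data = {}                    # (i, j) -> nonzero value

        def __getitem__(self, key):
            return self.data.get(key, 0)      # missing entry reads as 0

        def __setitem__(self, key, value):
            if value:
                self.data[key] = value
            else:
                self.data.pop(key, None)      # keep the dict sparse

    A = SparseMatrix(3, 3)
    A[0, 0] = 2
    assert A[0, 0] == 2 and A[2, 1] == 0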

GoogleCodeExporter commented 9 years ago
I've written a basic matrix class using a dictionary to represent the data. I expect it to be faster than lists even for dense matrices, because a dict lookup by index is probably faster than a list's, due to their implementations.

There is nothing mpmath-specific; the basic matrix should be as general as possible.

Original comment by Vinzent.Steinberg@gmail.com on 20 Aug 2008 at 1:50

Attachments:

GoogleCodeExporter commented 9 years ago
Finally I refactored the code to work with the new Matrix class. The examples are partly broken and doctests for linalg.py and the norms are missing, but it works fine.

There are no special Matrix types; it just works with the given data, be it mpf, float or Rational.

Original comment by Vinzent.Steinberg@gmail.com on 20 Aug 2008 at 3:31

Attachments:

GoogleCodeExporter commented 9 years ago
What is the advantage of this over numpy.matrix? Without the optimization of *not* creating mpf instances, I think one might as well just provide eps-aware linalg functions for numpy.matrix or numpy.array.

Original comment by fredrik....@gmail.com on 21 Aug 2008 at 9:42
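
For the record, this works because numpy object arrays hold arbitrary Python objects and dispatch arithmetic back to them; only numpy's own linalg routines (which assume float64) are off-limits. A small demonstration, assuming current numpy and mpmath:

    import numpy
    from mpmath import mp, mpf

    mp.dps = 30
    A = numpy.array([[mpf(1), mpf(2)], [mpf(3), mpf(4)]], dtype=object)
    print(A + A)      # elementwise: delegates to mpf.__add__
    print(A.dot(A))   # matrix product via a slow object loop, full precision
    # numpy.linalg.inv(A) would not respect the 30-digit precision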

GoogleCodeExporter commented 9 years ago
Does numpy have arbitrary precision and pure-Python linear algebra? Which optimization do you mean? I implemented matrix using a dict, so only non-zero values are actually stored. Afaik numpy uses arrays.

Original comment by Vinzent.Steinberg@gmail.com on 21 Aug 2008 at 11:18

GoogleCodeExporter commented 9 years ago
You can use any Python objects, including mpmath numbers, in numpy arrays. The only catch is that the higher linear algebra functions assume the standard machine epsilon, so they don't work well (if at all) at high precision. IMO it would make sense to first write linear algebra functions that work with numpy arrays of mpmath numbers, and then optionally write an optimized custom matrix class.

Storing just the _mpf_ tuples can give a ~2x speedup. It's a bit slower if you iterate over the matrix "from outside", so BLAS-like operations need to be implemented with the internal representation in mind. (Numpy arrays have the same performance tradeoff; iterating over a numpy array from Python builds new float instances.)

See my matrix.py for how e.g. the dot product can be optimized. It would be especially interesting to see how optimized Gaussian elimination compares with a direct version.

Original comment by fredrik....@gmail.com on 21 Aug 2008 at 12:43
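
The _mpf_-tuple trick looks roughly like this: work on the raw (sign, man, exp, bc) tuples in the inner loop and build mpf instances only at the end. The function names below are from present-day mpmath.libmp (in 2008 this module was mpmath.lib); treat the exact spellings as assumptions, not as the code in matrix.py.

    from mpmath import mp, mpf
    from mpmath.libmp import mpf_add, mpf_mul, fzero

    def raw_dot(xs, ys, prec):
        """Dot product over raw _mpf_ tuples; no mpf instances in the loop."""
        acc = fzero
        for x, y in zip(xs, ys):
            acc = mpf_add(acc, mpf_mul(x, y, prec), prec)
        return acc

    a = [mpf(i) / 3 for i in range(1, 6)]
    b = [mpf(i) / 7 for i in range(1, 6)]
    raw = raw_dot([x._mpf_ for x in a], [y._mpf_ for y in b], mp.prec)
    print(mpf(raw))   # rebuild a single mpf from the raw result tuple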

GoogleCodeExporter commented 9 years ago
What do you expect from linear algebra for mpmath? I was going to implement only the basic stuff in mpmath, as a Python-only solution which numpy cannot provide. Optimizations are bells and whistles that can be added later. First it should just work, imho, to have a basis for further improvements. And after all, you can implement these optimizations much more easily than I can, as you know mpmath much better. For example, you could subclass a slow, general matrix to create an optimized mpf matrix.

Original comment by Vinzent.Steinberg@gmail.com on 24 Aug 2008 at 5:08

GoogleCodeExporter commented 9 years ago
There is the question of how to handle all the linear algebra things uniformly across numpy/mpmath/sympy. E.g. clearly people would like to do linear algebra with floats, mpf and symbolic objects, and clearly different algorithms are best for each option. So imho the general case is the symbolic one, less general is the mpf case, and the very specialized one is the float case (numpy).

And then there are pure Python and Python+Cython (or C/Fortran) versions of each.

Do you have any ideas how to make all of this work together nicely?

Original comment by ondrej.c...@gmail.com on 24 Aug 2008 at 8:10

GoogleCodeExporter commented 9 years ago
I agree.

What do I expect? I guess the feature I'm most interested in is multidimensional equation solving. Matrix exponentials and the like would also be fun to have.

Original comment by fredrik....@gmail.com on 24 Aug 2008 at 8:13
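
A matrix exponential is straightforward to prototype once a matrix type exists. Here is a naive Taylor-series version over today's mpmath matrix API (present-day mpmath actually ships expm; this sketch only shows the flavor and is not that implementation):

    from mpmath import mp, matrix, eye, mnorm

    def expm_taylor(A):
        """Sum I + A + A^2/2! + ... until the terms fall below mp.eps."""
        term = eye(A.rows)          # current Taylor term A^k / k!
        result = eye(A.rows)
        k = 1
        while mnorm(term, 1) > mp.eps * mnorm(result, 1):
            term = term * A / k
            result += term
            k += 1
        return result

    mp.dps = 30
    print(expm_taylor(matrix([[0, 1], [-1, 0]])))  # ~ [[cos 1, sin 1], [-sin 1, cos 1]]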

GoogleCodeExporter commented 9 years ago
An easy common interface is nested lists, which can be handled by all of them right now.

I think mpmath should be able to do linear algebra without sympy. Of course I could have taken sympy's Matrix, but I don't like the code too much, so I preferred to rewrite it from scratch.

Linear multidimensional equation solving for determined or overdetermined systems is implemented and working. The algorithms I used are optimized for floating-point math, so it's not redundant with sympy's solver, which uses exact rational arithmetic, for which QR and implicit scaling would be pointless.

Nonlinear solving could be moved from sympy to mpmath; the algorithm could easily be extended to solve overdetermined systems too. But it needs *close* starting points, so it's not as powerful as numpy's solvers.

I don't know matrix exponentials, but I could implement some more linear multidimensional stuff (Cholesky, iterative methods...).

Are you against having different sympy and mpmath matrices?

> Do you have any ideas how to make all of this work together nicely?

I'd implement it generally enough to let the user choose, and to be able to use specialized functions or fall back to the more general ones. I mentioned the already existing common interface (I'm not sure whether sympy's matrix has tolist).

Original comment by Vinzent.Steinberg@gmail.com on 25 Aug 2008 at 10:37

GoogleCodeExporter commented 9 years ago
matrix.py is relatively complete now (better support for vectors and more tests); linalg.py still needs more work, especially unit tests for LR solving and a high-level interface. The low-level QR solving is done, including tests.

Do you think Matrix(2) should be an empty 2x2 matrix (current behaviour) or Matrix([[2]]) (numpy's behaviour)? Other suggestions?

Original comment by Vinzent.Steinberg@gmail.com on 27 Aug 2008 at 4:20

Attachments:

GoogleCodeExporter commented 9 years ago
I believe we should be as numpy-compatible as possible, unless we have a good reason for being incompatible.

Original comment by ondrej.c...@gmail.com on 27 Aug 2008 at 4:27

GoogleCodeExporter commented 9 years ago
Well, personally I prefer the current behavior over numpy's; it's much more useful imho. You rarely want a scalar represented by a matrix, and the 4 extra characters don't hurt either. For me this reason is good enough. :)

I don't care much about this though, so if compatibility is a better reason for you, I'll change it. Please be concrete: are you +1 or -1?

(I'm not planning to implement all the fancy numpy features; there is more important stuff to do currently.)

It's possible to use *one* integer as an index when the matrix is a vector (rows or cols == 1). I think it's quite useful, but I don't think numpy's matrix does this, and changing it would break the algorithms or require a special vector type. Keep it or drop it?

Column vectors are created via Matrix([a1, ..., an]), row vectors via Matrix([[a1, ..., an]]) and matrices via Matrix(
[[a11, ..., a1n],
 [..., ..., ...],
 [am1, ..., amn]]).

Matrix(m, n) creates an empty m x n matrix.

Original comment by Vinzent.Steinberg@gmail.com on 27 Aug 2008 at 5:45

GoogleCodeExporter commented 9 years ago
Fredrik is the main maintainer of numpy, so he should say +1, or -1 if he's willing to maintain it. :)

I don't have time to look at this in more detail -- I only noticed that you use too many "print" statements; imho the tests should be done in the py.test way, just like all the other tests.

Original comment by ondrej.c...@gmail.com on 27 Aug 2008 at 5:58

GoogleCodeExporter commented 9 years ago
numpy -> mpmath of course.

Original comment by ondrej.c...@gmail.com on 27 Aug 2008 at 5:58

GoogleCodeExporter commented 9 years ago
:) I was really wondering whether I had missed something...

The print statements are from examples I had to do for a course; they're just not yet converted to unit tests. Don't worry, the unit tests don't use any print statements, but thanks for pointing it out.

Original comment by Vinzent.Steinberg@gmail.com on 27 Aug 2008 at 6:20

GoogleCodeExporter commented 9 years ago
I think it looks great; it just needs tests in the proper format and mpmath precision support. Matrix should probably mpmathify the inputs unless given a keyword arg telling it not to.

I think Matrix(m) should be an error (too ambiguous), maybe also Matrix(m,n). There should be zeros(m,n), ones(m,n), diag(v) for creating standard matrices.

I would prefer Matrix([1,2]) to be a column matrix (vector) rather than a row (vector), though it's problematic to break compatibility here.

I prefer 'matrix' to 'Matrix'.

Original comment by fredrik....@gmail.com on 27 Aug 2008 at 6:31
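
These constructor names are in fact what mpmath ended up with; usage today looks like this (stated as an assumption about the final API, which postdates this thread):

    from mpmath import zeros, ones, diag, eye

    print(zeros(2, 3))       # 2x3 matrix of zeros
    print(ones(2))           # 2x2 matrix of ones
    print(diag([1, 2, 3]))   # 3x3 diagonal matrix
    print(eye(3))            # 3x3 identity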

GoogleCodeExporter commented 9 years ago
> I think it looks great; it just needs tests in the proper format and mpmath precision support.

Yep, it's not yet finished. Precision is only needed to determine whether a matrix is numerically singular; I'll trivially replace the hardcoded eps with mpmath's.

> Matrix should probably mpmathify the inputs unless given a keyword arg
> telling it not to.

Why? It's less flexible.

> I think Matrix(m) should be an error (too ambiguous), maybe also Matrix(m,n). There should be zeros(m,n), ones(m,n), diag(v) for creating standard matrices.

Ok, I'll change that. Should Matrix() be valid? Otherwise zeros would need to use lists.

> I would prefer Matrix([1,2]) to be a column matrix (vector) rather than a row
> (vector), though it's problematic to break compatibility here.

It's already done this way. It's a shortcut for Matrix([[1],[2]]).
Matrix([[1,2]]) would create a row vector.

> I prefer 'matrix' to 'Matrix'.

It's conventional [1] to use class names with an uppercase first character, so I did it this way. But one could argue that data types begin lowercase (int, numpy's matrix, mpf...), so I'll change it; you're the boss. :)

Thank you for the feedback!

[1] http://www.python.org/dev/peps/pep-0008/

Original comment by Vinzent.Steinberg@gmail.com on 27 Aug 2008 at 7:04

GoogleCodeExporter commented 9 years ago
> Why? It's less flexible.

It ensures that full precision is used.

> Ok, I'll change that. Should Matrix() be valid? Otherwise zeros would need to use lists.

What would Matrix() return?

> It's already done this way.

Yes, the only problem is that numpy doesn't do it this way (matrix([1,2,3]) is a row matrix).

> It's conventional [1] to use class names with an uppercase first character

Yep, but I think all lowercase is more consistent for an interface that is intended to be used interactively.

Original comment by fredrik....@gmail.com on 27 Aug 2008 at 9:42

GoogleCodeExporter commented 9 years ago
> It ensures that full precision is used.

In which case is this necessary? float + mpf = mpf with full precision?

> What would Matrix() return?

An empty matrix with undefined dimensions. BTW, I think matrix(3, 2) is not ambiguous: integers are dimensions, lists are data. How can I implement zeros(m, n) without using lists if lists are the only interface to matrix? Use a "hidden" interface? I think it should be possible to create an empty matrix using __init__.

> Yes, the only problem is that numpy doesn't do it this way (matrix([1,2,3]) is a row matrix).

IIRC the numpy matrix interface is not final. There are some dissatisfied numpy developers, aren't there? I'd be glad to break compatibility in this case for the sake of usability. Vectors are usually columns.

Original comment by Vinzent.Steinberg@gmail.com on 28 Aug 2008 at 7:39

GoogleCodeExporter commented 9 years ago
> In which case is this necessary? float + mpf = mpf with full precision?

So that if you do mp.dps=30; Matrix([[1.0]])/3, you get a 50 digit quotient.

> An empty matrix with undefined dimensions.

Where is this useful?

> I think matrix(3, 2) is not ambiguous.

No, probably not. It's fine with me.

> How can I implement zeros(m, n) without using lists if lists are the only interface to matrix? Use a "hidden" interface?

I would probably write a hidden method.

> IIRC the numpy matrix interface is not final. There are some dissatisfied numpy developers, aren't there?

If that's true, then yes, I would like this to change in numpy.

Original comment by fredrik....@gmail.com on 28 Aug 2008 at 7:46

GoogleCodeExporter commented 9 years ago
> So that if you do mp.dps=30; Matrix([[1.0]])/3, you get a 50 digit quotient.

Err, well, I can't count, but you know what I mean :)

Original comment by fredrik....@gmail.com on 28 Aug 2008 at 7:59

GoogleCodeExporter commented 9 years ago
What about vectors represented by lists? Currently the algorithms return lists instead of column vectors (for example the solution x to solve(A, b)). Should I change this to use matrix, or add some magic so that matrices can be multiplied with lists, assuming they're column vectors?

Original comment by Vinzent.Steinberg@gmail.com on 28 Aug 2008 at 8:23

GoogleCodeExporter commented 9 years ago
Feel free to try a few different approaches and see what seems more convenient.

For solve(A, b), I'd expect to be able to iterate over the solution as a regular list, so maybe a list-like interface for column vectors is a good idea.

Original comment by fredrik....@gmail.com on 28 Aug 2008 at 8:43

GoogleCodeExporter commented 9 years ago
Already implemented for matrices with cols or rows == 1. :)

My implementation expected vectors to behave like lists (only for set/get/len, not append/remove), so I had to implement this to get it working.

Original comment by Vinzent.Steinberg@gmail.com on 28 Aug 2008 at 9:35

GoogleCodeExporter commented 9 years ago
Now with fewer bugs and many more tests. The most useful parts are the functions lr_solve, qr_solve and inverse (these are the "high level" functions). lr_solve is sadly broken atm, and qr_solve/lr_solve are missing some tests, but I'm too tired to fix it now. :)

It's almost finished; mpmathifying of input and advanced matrix creation functions are still missing, however. The xx_solve functions accept overdetermined functions, lr_solve is faster, qr_solve is more accurate and calculates the residual. inverse calculates A^-1 for A, which is actually rarely needed.

Original comment by Vinzent.Steinberg@gmail.com on 28 Aug 2008 at 9:57

Attachments:

GoogleCodeExporter commented 9 years ago
I meant overdetermined systems, not functions, of course.

Original comment by Vinzent.Steinberg@gmail.com on 28 Aug 2008 at 9:58

GoogleCodeExporter commented 9 years ago
Bug fixes, tests, forcing mpf by default; other types can be specified (including None).

TODO:
* printing of matrices
* consistent output (always return vectors and not lists sometimes)
* diag, ones

The basic algorithms are finished, feel free to review.

Original comment by Vinzent.Steinberg@gmail.com on 29 Aug 2008 at 9:10

Attachments:

GoogleCodeExporter commented 9 years ago
Great :-) If you like, I'll give you svn access so you can commit directly when you think it's ready.

A few comments:

matrix.py should be named something else to avoid confusion with the type, e.g. matrices.py

deepcopy can be quite inefficient with mpfs. I think it's better to implement a matrix.copy() method based on dict.copy() and use that when possible

I think norm_p should work when p = inf. norm_oo is not needed.

nthroot(x,p) is better than x**(1/p) (accuracy)

round(norm_p(x, 2), 10) == 14.0712472795 is dangerous; I suggest abs < eps or almosteq

I think matrix + scalar should be supported (adding scalar to each element)

matrix __div__ is missing

matrix __pow__ is missing (should use binary exponentiation for positive integers, and matrix inverse for negative integers)

Some tests for *high precision* linear algebra are needed.

Original comment by fredrik....@gmail.com on 29 Aug 2008 at 6:25
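
On the nthroot point, a quick comparison (mpmath.nthroot is assumed available here, which per the reply below was not yet true of every checkout at the time): the exponent 1/p is itself rounded before the power is taken, so x**(1/p) inherits that error, while nthroot targets the exact p-th root directly.

    from mpmath import mp, mpf, nthroot

    mp.dps = 25
    x = mpf(7)
    print(nthroot(x, 3))       # cube root computed directly
    print(x ** (mpf(1) / 3))   # exponent 1/3 is rounded first; last digits can differ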

GoogleCodeExporter commented 9 years ago
I added matrix construction stuff (ones, zeros, diag) and improved printing.

Current print behavior (note that repr is eval'able):

    6.74166    -1.60357    -6.68153    -6.41427 
          2    0.882272     1.09109    -4.14614 
          1    0.613809    0.816497     3.26599 

> matrix.py should be named something else to avoid confusion with the type, e.g. matrices.py

Done.

> deepcopy can be quite inefficient with mpfs. I think it's better to implement a matrix.copy() method based on dict.copy() and use that when possible

Done.

> I think norm_p should work when p = inf. norm_oo is not needed.

Done.

> nthroot(x,p) is better than x**(1/p) (accuracy)

>>> from mpmath import nthroot
------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython console>", line 1, in <module>
ImportError: cannot import name nthroot

> round(norm_p(x, 2), 10) == 14.0712472795 is dangerous; I suggest abs < eps or almosteq

Why? It should be exact.

> I think matrix + scalar should be supported (adding scalar to each element)

Done. (__add__, __radd__, __sub__, __rsub__)

> matrix __div__ is missing

Done.

> matrix __pow__ is missing (should use binary exponentiation for positive integers, and matrix inverse for negative integers)

Shouldn't it only be possible for square matrices?

> Some tests for *high precision* linear algebra are needed.

High precision? Do you mean many digits? There are already some extremely ill-conditioned systems.

Strange bug:

Traceback (most recent call last):
  File "matrices.py", line 464, in <module>
    test_matrix_basic()
  File "matrices.py", line 409, in test_matrix_basic
    assert (1 + ones(4)) / 2 - 1 == zeros(4)
TypeError: unsupported operand type(s) for /: 'matrix' and 'int'

WARNING: Failure executing file: <matrices.py>
>>> (1 + ones(4)) / 2 - 1 == zeros(4)
True

This one really confuses me; I'll look into it later. Maybe there's something trivial I just don't see atm. :)

> If you like, I'll give you svn access so you can commit directly when you
> think it's ready.

Thank you for the comments and the offer. I could commit it willingly, but at least one person should review it first.

When this is in, it's trivial to move the multidimensional Newton solver to mpmath. When using lr_solve, it will be able to solve even overdetermined systems.

Cholesky should be implemented; solving overdetermined systems with lr_solve would then be twice as efficient.

Original comment by Vinzent.Steinberg@gmail.com on 29 Aug 2008 at 8:21

Attachments:

GoogleCodeExporter commented 9 years ago
> Current print behavior (note that repr is eval'able):

Please add square brackets (like in SymPy)

> >>> from mpmath import nthroot
> ------------------------------------------------------------
> Traceback (most recent call last):
>   File "<ipython console>", line 1, in <module>
> ImportError: cannot import name nthroot

Works for me. Are you sure your version is up to date?

> Why? It should be exact.

Yes, but as e.g. issue 22 shows, rounding of floats isn't entirely reliable in Python.

> Shouldn't it only be possible for square matrices?

Yes. If not square, an exception should be raised.

> High precision? Do you mean many digits? There are already some extremely ill-conditioned systems.

Yes, many digits. Just a basic sanity test; say, set mp.dps=50, create a random 3x3 matrix, and check that (inverse(inverse(A)) - A) < 1e-45.

Speaking of which, there should be a way to create a matrix with random entries.

> Strange bug:

This is probably because of importing division from __future__. You need to set __truediv__ = __div__ and __rtruediv__ = __rdiv__.

Original comment by fredrik....@gmail.com on 29 Aug 2008 at 8:38
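
The suggested sanity test, written out (randmatrix, inverse and mnorm are today's mpmath spellings, assumed here; at this point in the thread the random constructor was still called randMatrix):

    from mpmath import mp, mpf, randmatrix, inverse, mnorm

    mp.dps = 50
    A = randmatrix(3)
    # inverting twice and coming back to A within 1e-45 at 50 digits
    assert mnorm(inverse(inverse(A)) - A, 1) < mpf('1e-45')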

GoogleCodeExporter commented 9 years ago
> Thank you for the comments and the offer. I could commit it willingly, but at least one person should review it first.

You've been added. I'll be happy to review, of course.

Original comment by fredrik....@gmail.com on 29 Aug 2008 at 8:49

GoogleCodeExporter commented 9 years ago
Fixed all known bugs, added power for square matrices (positive exponents only), added randMatrix, and added your suggested precision test (it's actually < 1.e-49). Printing looks like this:

[      70.060136       1.4721088       32.281234       24.490699       78.756166 ]
[      8.0105533       19.472414       49.933242       89.557104       66.919149 ]
[      76.133847       37.379937       99.695726       31.056993       15.400508 ]
[      38.569169       71.765831       97.893696       47.519438       59.699917 ]
[      58.173902       76.780653       68.862332        15.00056       51.952324 ]

And for larger/smaller numbers:

[ 7.0185131e-101  7.6571163e-101  3.4064237e-101  3.6204975e-101  9.3532469e-101 ]
[ 2.5799866e-101  4.9964129e-101   8.456734e-101  3.4355103e-101  2.0851709e-101 ]
[ 4.8129776e-101  2.9716847e-101   6.491503e-101  4.7447797e-101  9.6740155e-101 ]
[ 8.9703985e-102    1.32653e-101  9.4671277e-101  2.1570651e-101  1.5139611e-101 ]
[ 7.7285348e-102  5.0277383e-101  2.2541117e-102  3.2373658e-101  2.7748112e-101 ]

Thank you for your helpful support. I think it's now nearly committable. The round stuff is not fixed, though.

Original comment by Vinzent.Steinberg@gmail.com on 29 Aug 2008 at 11:45

Attachments:

GoogleCodeExporter commented 9 years ago
I'll be away for a week; if you want, you can commit the current code (just move the tests to their directory). If not, just wait until I'm back. :)

Original comment by Vinzent.Steinberg@gmail.com on 30 Aug 2008 at 8:35

GoogleCodeExporter commented 9 years ago
Implemented Cholesky factorization and solving. This affects lr_solve for overdetermined systems.

I'll add almosteq for matrices to remove round() from tests and prepare a patch soon.

Original comment by Vinzent.Steinberg@gmail.com on 5 Sep 2008 at 9:34

Attachments:

GoogleCodeExporter commented 9 years ago
To be honest, I don't like almosteq myself and never use it, probably because I can't remember how "almost" is supposed to be defined (absolute difference? relative difference? either? both?). Maybe we should instead introduce the more explicit functions abserr and relerr and use

    abserr(a, b) < 1e-10

or

    relerr(a, b) < 10*eps

?

Overloaded to work for matrices just as for numbers, of course.

Original comment by fredrik....@gmail.com on 5 Sep 2008 at 9:53
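
A possible shape for those helpers (a hypothetical sketch following the comment above, not anything committed; a matrix overload would substitute a norm for abs):

    from mpmath import mp, mpf, sqrt

    def abserr(a, b):
        # absolute difference; for matrices this would be a norm of a - b
        return abs(a - b)

    def relerr(a, b):
        # relative difference, guarded against division by zero
        return abs(a - b) / max(abs(a), abs(b), mp.eps)

    mp.dps = 15
    assert abserr(mpf(1) / 3, mpf('0.3333333333')) < mpf('1e-10')
    assert relerr(sqrt(mpf(2)) ** 2, mpf(2)) < 10 * mp.eps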

GoogleCodeExporter commented 9 years ago
What is abserr supposed to do? abs(a - b)? For matrices/vectors you would just use a norm instead of abs.

Original comment by Vinzent.Steinberg@gmail.com on 6 Sep 2008 at 2:23

GoogleCodeExporter commented 9 years ago
Yep.

Original comment by fredrik....@gmail.com on 7 Sep 2008 at 11:24

GoogleCodeExporter commented 9 years ago
Finally I managed to get used to svn; here's the patch, please review.

Only the high-level functions are imported via from mpmath import *.

Original comment by Vinzent.Steinberg@gmail.com on 9 Sep 2008 at 6:45

Attachments:

GoogleCodeExporter commented 9 years ago
Thanks. I'm busy now, so I'll look at it tomorrow or Friday.

Original comment by fredrik....@gmail.com on 10 Sep 2008 at 3:36

GoogleCodeExporter commented 9 years ago
I've played with it now, and it looks very good overall.

force_type should use convert_lossless or mpnumeric instead of mpf, so that complex matrices work automatically. Right now matrix([[1j]]) raises an exception. Ditto for mpi (apparently convert_lossless doesn't recognize intervals, so it needs to be patched for this to work).

It would be nice if you could do eye(2) * [2,3] (automatic conversion of list operands to matrices).

The "from mpmath.module" imports should be changed to "from module". (Among other reasons, this is required by SymPy.)

Matrix __str__ should look at the maximum used column width. E.g. print diag([1,2,3]) currently looks bad.

Matrix.__pow__ uses n multiplications instead of log2(n).

It might be worth sprinkling @extraprec(10) over some of the linalg functions. This eliminates slight roundoff errors in basic cases, and although the effect is mostly cosmetic, it does make life a bit more pleasant. Compare:
>>> lu_solve([[1,2],[3,4]], [5,6])
matrix(
[[mpf('-3.9999999999999987')],
 [mpf('4.4999999999999991')]])
>>> extraprec(10)(lambda: lu_solve([[1,2],[3,4]], [5,6]))()
matrix(
[[mpf('-4.0')],
 [mpf('4.5')]])

In addition to lu_solve and qr_solve, it would be useful to have functions lu and qr (equivalent to Matlab's) to obtain the factorizations themselves.

This should work:
A = matrix([[1,2],[3,4]], force_type=mpi)
b = matrix([5,6], force_type=mpi)
lu_solve(A, b)

(And yes, the problem here is with mpi, not your code. Just including it here so I don't forget about it. :)

randMatrix should be renamed to randmatrix

Other than the imports and perhaps the force_type stuff, I think this is ready to go in, and all other issues can be fixed later.

Original comment by fredrik....@gmail.com on 12 Sep 2008 at 5:43

GoogleCodeExporter commented 9 years ago
> force_type should use convert_lossless or mpnumeric instead of mpf, so that complex matrices work automatically. Right now matrix([[1j]]) raises an exception. Ditto for mpi (apparently convert_lossless doesn't recognize intervals, so it needs to be patched for this to work).

Ok, I didn't know about this function. Raising an exception was intentional (you would use force_type=mpc), but convert_lossless is a much better default. Done.

> It would be nice if you could do eye(2) * [2,3] (automatic conversion of list
> operands to matrices).

This is already a TODO somewhere in the code.

> The "from mpmath.module" imports should be changed to "from module". (Among 
other
> reasons, this is required by SymPy.)

This is only used in tests (like already in test_ode.py for instance). 
matrix.py and
linalg.py don't import this way.
Or do you mean from mpmath import stuff too?

> Matrix __str__ should look at the maximum used column width. E.g. print diag([1,2,3]) currently looks bad.

There was not much work spent on printing; it's currently intended that you increase the size of your terminal instead. I'm not sure whether this can be fixed using string formatting; maybe more complex code (and thus work :) is needed.

> Matrix.__pow__ uses n multiplications instead of log2(n)

I don't understand.

> It might be worth sprinkling @extraprec(10) over some of the linalg functions. This eliminates slight roundoff errors in basic cases, and although the effect is mostly cosmetic, it does make life a bit more pleasant. Compare:
>
> >>> lu_solve([[1,2],[3,4]], [5,6])
> matrix(
> [[mpf('-3.9999999999999987')],
>  [mpf('4.4999999999999991')]])
> >>> extraprec(10)(lambda: lu_solve([[1,2],[3,4]], [5,6]))()
> matrix(
> [[mpf('-4.0')],
>  [mpf('4.5')]])

Nice idea. I think this is especially useful for the residual, which currently has cancellation errors that could be avoided this way. I'll use double precision there. Done.

> In addition to lu_solve and qr_solve, it would be useful to have functions lu and qr (equivalent to Matlab's) to obtain the factorizations themselves.

They are there (LU_decomp and householder), but they are low-level and not imported by default. householder() does not compute Q explicitly, so additional code is needed here (it already returns sufficient information to calculate Q, so this should not be a problem).

> randMatrix should be renamed to randmatrix

I took the name from sympy, but I'm +1 for lowercase.
Done.

(Fixing issue 58 is included in the patch.)

Original comment by Vinzent.Steinberg@gmail.com on 12 Sep 2008 at 9:15

Attachments:

GoogleCodeExporter commented 9 years ago
> Or do you mean from mpmath import stuff too?

Yes. This should be "from mptypes import ...".

> I don't understand.

A**n is computed this way, which requires n-1 multiplications:

    for i in xrange(other - 1):
        new *= self

But it should use repeated squaring, which only requires log2(n) multiplications; see
http://en.wikipedia.org/wiki/Exponentiation_by_squaring
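
A sketch of that change (binary exponentiation; written against today's mpmath matrix and eye as assumptions, not the class under review):

    from mpmath import matrix, eye

    def mat_pow(A, n):
        """A**n via repeated squaring: O(log n) multiplications."""
        assert A.rows == A.cols and n >= 0
        result = eye(A.rows)
        while n:
            if n & 1:              # this bit is set -> include current square
                result = result * A
            A = A * A              # square for the next bit
            n >>= 1
        return result

    assert mat_pow(matrix([[1, 1], [0, 1]]), 10) == matrix([[1, 10], [0, 1]])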

> Nice idea. I think this is especially useful for the residual, which currently has cancellation errors that could be avoided this way. I'll use double precision there. Done.

Unfortunately, "@extraprec(mp.prec)" does not do what it should, because mp.prec is only read off at import time. You can implement doubling the precision with the following pattern:

    orig = mp.prec
    try:
        mp.prec = orig*2
        ...
    finally:
        mp.prec = orig

> They are there (LU_decomp and householder), but they are low-level and not imported by default.

Ok. I would appreciate it if you could create high-level wrappers.

Original comment by fredrik....@gmail.com on 12 Sep 2008 at 9:41