moble / quaternion

Add built-in support for quaternions to numpy
MIT License

quaternion matrix multiply #100

Closed meqiu1 closed 5 years ago

meqiu1 commented 5 years ago

Describe the solution you'd like
I want to compute the product of two quaternion matrices, but matrix multiplication does not work for quaternion arrays. Could you please add support for quaternion matrix multiplication?

The codes are:

import numpy as np
import quaternion
aa = np.random.rand(2, 2, 4)
aa_q = np.mat(quaternion.from_float_array(aa))
bb = aa_q * aa_q

Here is the error:

return N.dot(self, asmatrix(other))
ValueError: dot not available for this type

moble commented 5 years ago

Just to be clear, are you sure you actually want matrix multiplication? (I can imagine that would be useful if you're using some other Clifford algebra.) If so, the unfortunate fact is that numpy has hard-coded a set of types that can work with dot, einsum, and tensordot, so there's apparently no built-in way for numpy to matrix multiply anything but int, float, and complex numbers (and closely related types). So you'll have to do it manually. If you're mostly using 2x2 matrices of quaternions, then you should probably just expand it by hand; something like this:

def matmul2x2quat(a, b):
    # Explicit 2x2 matrix product c[i, j] = sum_k a[i, k]*b[k, j].
    # Quaternion multiplication is non-commutative, so keep the order a*b.
    return np.array([
        [a[0,0]*b[0,0] + a[0,1]*b[1,0], a[0,0]*b[0,1] + a[0,1]*b[1,1]],
        [a[1,0]*b[0,0] + a[1,1]*b[1,0], a[1,0]*b[0,1] + a[1,1]*b[1,1]]
    ])

Otherwise, you should expand it generally to loop over the indices with something like

def matmulquat(a, b):
    if not a.shape[1] == b.shape[0]:
        raise ValueError('Wrong shapes: a.shape={0} and b.shape={1}'.format(a.shape, b.shape))
    c = np.empty((a.shape[0], b.shape[1]), dtype=a.dtype)
    for i in range(a.shape[0]):
        for j in range(b.shape[1]):
            c[i, j] = sum(a[i, k]*b[k, j] for k in range(a.shape[1]))
    return c

Note that I haven't checked that these are actually right, or thought about how they could be made faster.
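One quick sanity check (my addition, not part of the original thread): the generic loop only assumes the elements support * and +, so it can be run on ordinary float matrices, where numpy's own matrix multiply is the reference:

```python
import numpy as np

def matmulquat(a, b):
    # Same generic loop as above; works for any dtype whose elements
    # support * and +, including quaternions.
    if not a.shape[1] == b.shape[0]:
        raise ValueError('Wrong shapes: a.shape={0} and b.shape={1}'.format(a.shape, b.shape))
    c = np.empty((a.shape[0], b.shape[1]), dtype=a.dtype)
    for i in range(a.shape[0]):
        for j in range(b.shape[1]):
            c[i, j] = sum(a[i, k]*b[k, j] for k in range(a.shape[1]))
    return c

a = np.random.random((3, 4))
b = np.random.random((4, 2))
# On plain floats the loop must agree with numpy's built-in matmul.
assert np.allclose(matmulquat(a, b), a @ b)
```

If this agrees with `a @ b` on floats, the indexing is right, and the same code applies unchanged to quaternion arrays.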

But if what you want is not actually matrix multiplication, you can multiply arrays element-wise by just using numpy arrays (rather than matrices):

aa_q = quaternion.from_float_array(aa)
bb = aa_q * aa_q

meqiu1 commented 5 years ago

Thank you very much for your suggestions. The code you have given here is useful for my project. I am interested in sparse representations for quaternion signals, and I want to program the quaternion singular value decomposition algorithm, but I have not found any related modules. The Clifford modules on GitHub are for geometric algebra, not for my study. Could you please give me some suggestions on quaternion matrix computations? Maybe I have to do them manually.

moble commented 5 years ago

If you're processing signals, I imagine you'll have at least thousands of samples you'll want to process, meaning you'll have matrices whose dimensions are in the thousands. That's so large that you'll really want to use faster code; the dumb code that I showed you above will quickly become unbearably slow. So as I see it, there are two main ways you might want to think about going.

It's possible that you'll need to do more complicated things than I'm thinking of, which means you'll need more flexible code and more fine-grained control over what's going on. In that case, I would recommend using numba with standard numpy arrays of floats. You'd have to rewrite the quaternion multiplication/addition/etc. code yourself, but that's easy compared to the other stuff you'll have to do.

But before you get too deep into numba, I think there's a simpler approach that you should try. The basic idea is to use the representation of quaternions as 2x2 complex matrices. Multiplication and addition are just the usual matrix-multiplication and -addition functions, which numpy is very good at. So basically, this hides the quaternion multiplication inside sub-matrices of the matrix multiplication that you're doing anyway. This means using twice as much memory for your quaternions as in the simple case, but it also means that you get to use numpy's BLAS capabilities. These include matrix-multiplication functions that are so much faster than what the rest of us could feasibly write that you will almost certainly get far faster results with this method despite the larger memory requirements.
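To see why this representation works, here is a minimal pure-numpy sketch (my addition; `quat_to_complex2x2` and `hamilton` are hand-written helpers, not part of the quaternion package): the 2x2 complex matrix of a product of two quaternions equals the matrix product of their 2x2 matrices.

```python
import numpy as np

def quat_to_complex2x2(w, x, y, z):
    # Standard 2x2 complex representation of w + x*i + y*j + z*k:
    #   [[ w + x*1j,  y + z*1j],
    #    [-y + z*1j,  w - x*1j]]
    return np.array([[w + x*1j,  y + z*1j],
                     [-y + z*1j, w - x*1j]])

def hamilton(p, q):
    # Hamilton product of quaternions given as (w, x, y, z) tuples.
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

p, q = (1.0, 2.0, 3.0, 4.0), (0.5, -1.0, 0.25, 2.0)
# The representation is multiplicative: rep(p) @ rep(q) == rep(p*q).
assert np.allclose(quat_to_complex2x2(*p) @ quat_to_complex2x2(*q),
                   quat_to_complex2x2(*hamilton(p, q)))
```

Because this homomorphism extends block-wise to matrices of quaternions, a quaternion matrix product becomes an ordinary complex matrix product of twice the size.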

In this case, the basic code you want is something like this:

def quaternion_matrix_to_complex_matrix(q):
    # Map each quaternion w + x*i + y*j + z*k to the 2x2 complex block
    #   [[ w + x*1j,  y + z*1j],
    #    [-y + z*1j,  w - x*1j]]
    q = np.asarray(q, dtype=np.quaternion)
    assert q.ndim == 2
    c = np.empty((q.shape[0]*2, q.shape[1]*2), dtype=np.complex128)
    q = q.view((np.complex128, 2))  # each quaternion as (w + x*1j, y + z*1j)
    c[::2, ::2] = q[..., 0]
    c[::2, 1::2] = q[..., 1]
    c[1::2, ::2] = -q[..., 1].conjugate()
    c[1::2, 1::2] = q[..., 0].conjugate()
    return c

def complex_matrix_to_quaternion_matrix(c):
    # The even rows hold the (w + x*1j, y + z*1j) pairs, which view
    # straight back as quaternions.
    return np.ascontiguousarray(c[::2], dtype=np.complex128).view(np.quaternion)

This first function converts a 2-dimensional array of quaternions into a 2-dimensional array of complex numbers in the matrix representation, and the second one converts it back to quaternions. So for example, I could create a couple of 1024x1024 quaternion matrices with some random numbers like this:

q1 = quaternion.as_quat_array(np.random.random((1024, 1024, 4)))
q2 = quaternion.as_quat_array(np.random.random((1024, 1024, 4)))

and then convert them to complex matrices like this:

c1 = quaternion_matrix_to_complex_matrix(q1)
c2 = quaternion_matrix_to_complex_matrix(q2)

You can multiply these matrices with a simple c1 @ c2. (Note that this is the fastest way to do matrix multiplication with modern python and numpy. And don't bother using np.mat, because it will go away eventually.) Assuming you've installed numpy with a reasonably modern installer (conda), it should automatically have good OpenBLAS or MKL integration, which means that matrix multiplication will be extremely fast. You can also convert the product back to quaternions with complex_matrix_to_quaternion_matrix(c1 @ c2). (Though your SVD algorithm might actually work better with the complex representation anyway.) This will be thousands of times faster than the matmulquat function I wrote above.

Again, I haven't tested these functions, so if you use them, you should write some basic tests to make sure they're right. For example, you might check them against a totally different way of doing the computation (like matmulquat) for relatively small samples. Even if the test is slow, at least you'll have confidence that the fast version is correct.
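For a concrete test sketch (my addition, deliberately independent of the quaternion package so the reference is a separate code path): represent an m-by-n quaternion matrix as an (m, n, 4) float array, build the 2m-by-2n complex matrix, multiply with @, and compare against a direct loop over hand-written Hamilton products.

```python
import numpy as np

def hamilton(p, q):
    # Hamilton product of quaternions stored as length-4 arrays (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def to_complex(qm):
    # (m, n, 4) float array of quaternions -> (2m, 2n) complex matrix.
    m, n, _ = qm.shape
    a = qm[..., 0] + 1j*qm[..., 1]   # w + x*1j
    b = qm[..., 2] + 1j*qm[..., 3]   # y + z*1j
    c = np.empty((2*m, 2*n), dtype=complex)
    c[::2, ::2] = a
    c[::2, 1::2] = b
    c[1::2, ::2] = -b.conjugate()
    c[1::2, 1::2] = a.conjugate()
    return c

def from_complex(c):
    # Inverse of to_complex: read (w, x, y, z) back off the even rows.
    a = c[::2, ::2]
    b = c[::2, 1::2]
    return np.stack([a.real, a.imag, b.real, b.imag], axis=-1)

def matmul_reference(p, q):
    # Slow direct loop: c[i, j] = sum_k p[i, k] * q[k, j] (Hamilton products).
    m, n = p.shape[0], q.shape[1]
    c = np.zeros((m, n, 4))
    for i in range(m):
        for j in range(n):
            for k in range(p.shape[1]):
                c[i, j] += hamilton(p[i, k], q[k, j])
    return c

p = np.random.random((3, 4, 4))
q = np.random.random((4, 2, 4))
fast = from_complex(to_complex(p) @ to_complex(q))
assert np.allclose(fast, matmul_reference(p, q))
```

The same pattern of test (fast path vs. slow independent reference on small random inputs) carries over directly to the quaternion-dtype functions above.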

meqiu1 commented 5 years ago

Thank you very much for your suggestions. I will try to use the complex matrices for quaternion matrix computations.

moble commented 5 years ago

@meqiu1 I just ran across the Julia packages for quaternions, and noticed that there's a package here that specifically advertises the ability to do SVDs with quaternions. There's a related pull request, but one way or another, it looks like the code is readily available and working. This might be worth looking into, whether that means using it instead of Python, calling it directly from Python, comparing against your results, or just getting ideas for how to do the programming.

meqiu1 commented 5 years ago

OK. I will try it. Thank you for your help.