JuliaLinearAlgebra / Octavian.jl

Multi-threaded BLAS-like library that provides pure Julia matrix multiplication
https://julialinearalgebra.github.io/Octavian.jl/stable/

Generalize matrix multiply to support `C = f.(A * B + x)` #56

Open chriselrod opened 3 years ago

chriselrod commented 3 years ago

This is for the sake of implementing dense layers for neural networks.

The reverse pass also needs `g.(Cbar) * B'` and `A' * g.(Cbar)`; since `g.(Cbar)` appears in both, we should probably evaluate `g` just once per element of `Cbar`.

However, perhaps somewhat related to https://github.com/JuliaLinearAlgebra/Octavian.jl/issues/40, we should add support for batching matrix operations of different sizes as well -- in particular, the reverse pass of

gCbar = g.(Cbar)
Abar = gCbar * B'
Bbar = A' * gCbar

should perhaps be evaluated with one function call that can minimize the (already low) threading and synchronization overhead.

Ideally, we'd have an API for doing this a little more generically, but it would help thread allocation to know that many of the array dimensions here are the same.
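
For concreteness, here is a minimal sketch of the unfused reverse pass written against Octavian's existing `matmul!`; the fused/batched call proposed above does not exist yet, and `g`, `Cbar`, `A`, and `B` are the names from the example above:

```julia
using Octavian

# Unfused reverse pass as it could be written today; the proposal is to do
# all of this in a single threaded call so threads are launched only once.
gCbar = g.(Cbar)                    # evaluate g once per element of Cbar
Abar  = similar(A)
Bbar  = similar(B)
Octavian.matmul!(Abar, gCbar, B')   # Abar = g.(Cbar) * B'
Octavian.matmul!(Bbar, A', gCbar)   # Bbar = A' * g.(Cbar)
```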

DilumAluthge commented 3 years ago

I'm assuming that here g is the adjoint of f?

chriselrod commented 3 years ago

Yes; in the example, if `f === tanh`, then `g(x) = x * muladd(-x, x, 1)`.

Of course, `f` and `g` can be arbitrary (so long as they're defined on numbers); it's just that the primary use case and motivation would have `g = f'`.

I think this is a compelling use case for Octavian. While performance is close to MKL at smallish sizes, I think we'd have a nice advantage over the current dense layer implementation, which looks like this on the forward pass (x is the bias vector, normally called b):

C = A * B
@. C = f(C .+ x)

Note that the broadcast is single-threaded. I imagine that, by fusing and threading the entire operation, we could get a nice performance advantage over MKL at the sizes people would actually consider for CPU training, and definitely over the default OpenBLAS.
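
A minimal sketch of what the generalized call would compute; `dense!` and its signature are illustrative only, not an existing Octavian API, and the bias add and activation here are still written as a separate broadcast rather than being fused into the multiply as proposed:

```julia
using Octavian

# Semantics of the proposed fused operation, written unfused for clarity.
# `dense!` is a hypothetical name; the idea is that the bias add and the
# activation `f` would eventually run inside Octavian's threaded multiply,
# on each core's local block of C, instead of as a second pass over C.
function dense!(f, C, A, B, x)
    Octavian.matmul!(C, A, B)   # C = A * B (multi-threaded)
    @. C = f(C + x)             # C = f.(C .+ x) (currently single-threaded)
    return C
end
```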

DilumAluthge commented 3 years ago

> I think this is a compelling use case for Octavian. While performance is close to MKL at smallish sizes, I think we'd have a nice advantage over the current dense layer implementation, which looks like this on the forward pass (x is the bias vector, normally called b):
>
> C = A * B
> @. C = f(C .+ x)
>
> Note that the broadcast is single-threaded. I imagine that, by fusing and threading the entire operation, we could get a nice performance advantage over MKL at the sizes people would actually consider for CPU training, and definitely over the default OpenBLAS.

This is really exciting!

DilumAluthge commented 3 years ago

> Note that the broadcast is single-threaded.

Two questions:

  1. Can we vectorize the broadcast by using `@avx`?
  2. Would it make sense to try to multi-thread the broadcast?
chriselrod commented 3 years ago

Yes and yes.

The goal would be to launch threads just once per pass (i.e., once for the forward pass and once for the backward pass). This would reduce threading overhead, and also let us make sure locality is good and minimize how much data has to be moved between cores.
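
As a sketch, a vectorized and multi-threaded epilogue could look like the following, using LoopVectorization's `@tturbo` (the threaded successor to `@avx`) and specialized to `tanh` since the macro needs functions it knows how to vectorize; in the fused design this loop would instead run on each core's local tile of `C` right after that tile is computed:

```julia
using LoopVectorization

# Vectorized + multi-threaded version of `@. C = f(C + x)` for f = tanh.
# Running this as a separate pass still re-reads C from memory; doing it
# inside the threaded matmul would keep each block of C in that core's cache.
function tanh_bias_epilogue!(C::AbstractMatrix, x::AbstractVector)
    @tturbo for j in axes(C, 2), i in axes(C, 1)
        C[i, j] = tanh(C[i, j] + x[i])
    end
    return C
end
```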