chriselrod opened 3 years ago
I'm assuming that here `g` is the adjoint of `f`?
Yes, in the example if `f === tanh` then `g(x) = x * muladd(-x, x, 1)`.
Although of course `f` and `g` would be arbitrary (so long as they're defined on numbers), the primary use case and motivation would involve `g = f'`.
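For concreteness, a minimal sketch of that pair as described above (the expanded form in the comment is just algebra, added for illustration):

```julia
f = tanh
# As above: when f === tanh, the proposed elementwise g is
g(x) = x * muladd(-x, x, 1)   # muladd(-x, x, 1) == 1 - x^2, so g(x) == x * (1 - x^2)
```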
I think this is a compelling use case for Octavian. While performance is close to MKL at smallish sizes, I think we'd have a nice advantage over the current dense layer implementation, which looks like this on the forward pass (`x` is the bias vector, normally called `b`):

```julia
C = A * B
@. C = f(C .+ x)
```

Note that the broadcast is single threaded. I imagine we could get a nice performance advantage over MKL at the sizes people would actually consider for CPU training, and definitely over the default OpenBLAS, through this fusion and by threading the entire operation.
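For reference, here is what the unfused version looks like today with Octavian's existing `matmul` (a sketch; the activation, sizes, and bias are made up for illustration):

```julia
using Octavian

f(x) = max(x, zero(x))        # example activation, just for illustration
A = rand(Float32, 128, 256)
B = rand(Float32, 256, 64)
x = rand(Float32, 128)        # bias vector ("b" in Flux)

C = Octavian.matmul(A, B)     # threaded matmul
@. C = f(C + x)               # epilogue: still a single-threaded broadcast
```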
This is really exciting!
> Note that the broadcast is single threaded.

Two questions: would the fused broadcast be threaded as well, and would it use `@avx`?

Yes and yes.
The goal would be to launch threads just once per pass (i.e., once for the forward pass, and once for the backward pass). This would reduce threading overhead, and also let us make sure locality is good / minimize how much data has to be moved between cores.
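To illustrate the scheduling idea (this is only a sketch using Base threading, not Octavian's internals): partition the output once, and let each thread handle both its slice of the matmul and the epilogue, so the data it just wrote stays in that core's cache:

```julia
using LinearAlgebra, Base.Threads

# Sketch: one thread launch covers the whole forward pass. Each thread
# computes a column of A * B and immediately applies the bias + activation
# epilogue to it while it is still in cache.
function fused_dense!(C, A, B, x, f)
    @threads for j in axes(C, 2)
        Cj = @view C[:, j]
        mul!(Cj, A, @view(B[:, j]))   # this thread's slice of A * B
        @. Cj = f(Cj + x)             # fused epilogue on the same core
    end
    return C
end
```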
This is for the sake of implementing dense layers for neural networks.
The reverse pass also needs `g.(Cbar) * B'` and `A' * g.(Cbar)`, but the fact that we have two instances of `g.(Cbar)` here means we should probably evaluate `g` just once per element of `Cbar`.
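A minimal sketch of that reverse pass, evaluating `g` once and reusing the result for both products (the function name and signature here are hypothetical):

```julia
using Octavian

# Hypothetical helper: evaluate g once per element of Cbar, then reuse
# the result for both gradient products.
function dense_reverse(Cbar, A, B, g)
    Gbar = g.(Cbar)                    # one elementwise pass over Cbar
    Abar = Octavian.matmul(Gbar, B')   # g.(Cbar) * B'
    Bbar = Octavian.matmul(A', Gbar)   # A' * g.(Cbar)
    return Abar, Bbar
end
```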
However, perhaps somewhat related to https://github.com/JuliaLinearAlgebra/Octavian.jl/issues/40, we should add support for batching matrix operations of different sizes as well -- in particular, the reverse pass of the dense layer should perhaps be evaluated with one function call that can minimize the (already low) threading and synchronization overhead.
Ideally, we'd have an API for doing this a little more generically, but when allocating threads it'd help to know that a lot of the array dimensions here are the same.
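Until such a batched API exists, the two products could at least share a single spawn/wait point. A rough sketch, not a proposed implementation (both calls are themselves threaded, so a real batched API would also need to divide the cores between them):

```julia
using Octavian, Base.Threads

# Rough sketch: one synchronization point for both reverse-pass products.
# A real batched API would coordinate which cores work on which product.
function dense_reverse!(Abar, Bbar, Gbar, A, B)
    t = Threads.@spawn Octavian.matmul!(Abar, Gbar, B')
    Octavian.matmul!(Bbar, A', Gbar)
    wait(t)
    return Abar, Bbar
end
```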