matt-gardner opened 7 years ago
It looks like tf.einsum might do the trick, at least for simple similarity functions. For more complicated ones, I'm not sure.
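As a sketch of what that could look like: a single einsum contraction computes every pairwise dot-product similarity in a batch without tiling. This uses NumPy's einsum, whose subscript semantics match tf.einsum; the shapes and names here are hypothetical, not from the actual codebase.

```python
import numpy as np

# Hypothetical shapes: a batch of m "query" vectors and n "key" vectors.
batch, m, n, d = 2, 3, 4, 5
a = np.random.rand(batch, m, d)
b = np.random.rand(batch, n, d)

# "bmd,bnd->bmn": contract the shared feature dim d, producing a
# (batch, m, n) similarity matrix in one step, no tiled intermediates.
sims = np.einsum('bmd,bnd->bmn', a, b)
assert sims.shape == (batch, m, n)
```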
tf.matmul works well for generic dot-product-based similarities. It's probably a lot faster, since it calls the optimized matrix routines directly.
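For the dot-product case, the matmul form is just a batched matrix product against the transposed second argument (tf.matmul takes a transpose_b=True flag for exactly this). A NumPy sketch of the same computation, with made-up shapes:

```python
import numpy as np

batch, m, n, d = 2, 3, 4, 5
a = np.random.rand(batch, m, d)
b = np.random.rand(batch, n, d)

# Equivalent to tf.matmul(a, b, transpose_b=True):
# (batch, m, d) @ (batch, d, n) -> (batch, m, n) similarity matrix.
sims = np.matmul(a, b.swapaxes(1, 2))
assert sims.shape == (batch, m, n)
```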
The issue is that our similarity functions try to be flexible, letting you easily swap in different parameterized and non-parameterized functions when computing attentions. The trouble is that the way we make this easy takes a whole lot of memory. We need to rethink the API a bit.
I'm decreasing the priority of this, as the adaptive batch size and dynamic padding stuff makes this not too big of an issue anymore.
It'd still be a nice optimization, and would likely make runtimes faster, but it's not blocking anything anymore.
I'm not sure how this would work, really, but it takes a whole lot of memory to do it the way we do it, tiling everything and then doing an elementwise multiplication. There might be some way to make this work using some kind of batch_dot or dot.
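To make the memory cost concrete, here's a sketch contrasting the tile-and-multiply approach with a single contraction; both produce the same similarity matrix, but the tiled version materializes a (batch, m, n, d) intermediate instead of just the (batch, m, n) output. Shapes are hypothetical, and NumPy stands in for the TF ops.

```python
import numpy as np

batch, m, n, d = 2, 3, 4, 5
a = np.random.rand(batch, m, d)
b = np.random.rand(batch, n, d)

# Tiled approach: expand both inputs to (batch, m, n, d), then do an
# elementwise multiply and sum. The product materializes the full
# (batch, m, n, d) intermediate, which is the memory problem.
a_tiled = np.broadcast_to(a[:, :, None, :], (batch, m, n, d))
b_tiled = np.broadcast_to(b[:, None, :, :], (batch, m, n, d))
tiled = (a_tiled * b_tiled).sum(-1)

# Contraction: same result, but only the (batch, m, n) output is built.
contracted = np.einsum('bmd,bnd->bmn', a, b)
assert np.allclose(tiled, contracted)
```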