allenai / deep_qa

A deep NLP library, based on Keras / tf, focused on question answering (but useful for other NLP too)
Apache License 2.0

Avoid instantiating huge tensors as input to similarity functions #308

Open matt-gardner opened 7 years ago

matt-gardner commented 7 years ago

I'm not sure how this would work, really, but our current approach takes a whole lot of memory: we tile everything and then do elementwise multiplication. There might be some way to make this work using some kind of batch_dot or dot.
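A minimal sketch of the contrast, assuming a plain dot-product similarity; the tensor names and shapes are illustrative (not deep_qa's actual layer code) and the calls are written against TF 2-style APIs:

```python
import tensorflow as tf
from tensorflow.keras import backend as K

batch, n, m, d = 32, 200, 250, 300
a = tf.random.normal((batch, n, d))   # e.g. encoded question
b = tf.random.normal((batch, m, d))   # e.g. encoded passage

# Tiling approach: materialize two (batch, n, m, d) tensors, multiply
# elementwise, then reduce.  The tiled intermediates dominate memory use.
a_tiled = tf.tile(tf.expand_dims(a, 2), [1, 1, m, 1])           # (batch, n, m, d)
b_tiled = tf.tile(tf.expand_dims(b, 1), [1, n, 1, 1])           # (batch, n, m, d)
tiled_similarities = tf.reduce_sum(a_tiled * b_tiled, axis=-1)  # (batch, n, m)

# batch_dot approach: the (batch, n, m, d) intermediate is never built.
dot_similarities = K.batch_dot(a, b, axes=(2, 2))               # (batch, n, m)
```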

matt-gardner commented 7 years ago

It looks like tf.einsum might do the trick, at least for simple similarity functions. For more complicated ones, I'm not sure.
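For the simple dot-product case, a hedged sketch of how tf.einsum avoids the huge intermediate (shapes are illustrative):

```python
import tensorflow as tf

a = tf.random.normal((32, 200, 300))   # (batch, rows_a, dim)
b = tf.random.normal((32, 250, 300))   # (batch, rows_b, dim)

# Contract over the shared embedding dimension directly; no
# (batch, rows_a, rows_b, dim) tensor is ever instantiated.
similarities = tf.einsum('bnd,bmd->bnm', a, b)   # (batch, rows_a, rows_b)
```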

matt-peters commented 7 years ago

tf.matmul works well for generic dot-product-based similarities. It's probably a lot faster, too, since it calls the optimized matrix routines directly.
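A quick sketch of the tf.matmul version for the same dot-product similarity (again with illustrative shapes):

```python
import tensorflow as tf

a = tf.random.normal((32, 200, 300))   # (batch, rows_a, dim)
b = tf.random.normal((32, 250, 300))   # (batch, rows_b, dim)

# Batched matrix multiply dispatches to optimized GEMM kernels and yields
# the full similarity matrix in one call.
similarities = tf.matmul(a, b, transpose_b=True)   # (batch, rows_a, rows_b)
```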

matt-gardner commented 7 years ago

The issue is that our similarity functions try to be fancy, letting you easily swap different parameterized and non-parameterized functions when computing attentions. The trouble is that we make this easy by using a whole lot of memory. We need to re-think the API a bit; see the sketch below for one parameterized case.
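As one illustration of how a parameterized similarity could be computed without tiling, here is a hedged sketch of a bilinear similarity via einsum; this is not deep_qa's actual SimilarityFunction API, and the weight shape is an assumption:

```python
import tensorflow as tf

a = tf.random.normal((32, 200, 300))                   # (batch, rows_a, dim)
b = tf.random.normal((32, 250, 300))                   # (batch, rows_b, dim)
weights = tf.Variable(tf.random.normal((300, 300)))    # learned bilinear weights (assumed shape)

# Bilinear similarity a W b^T, contracted in a single einsum, so the
# (batch, rows_a, rows_b, dim) tensor from the tiling approach never exists.
similarities = tf.einsum('bnd,de,bme->bnm', a, weights, b)   # (batch, rows_a, rows_b)
```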

matt-gardner commented 7 years ago

I'm decreasing the priority of this, as the adaptive batch size and dynamic padding stuff makes this not too big of an issue anymore.

It'd still be a nice optimization, and would likely make runtimes faster, but it's not blocking anything anymore.