allenai / deep_qa

A deep NLP library, based on Keras / tf, focused on question answering (but useful for other NLP too)
Apache License 2.0

Optimize AttentionSumReader layers #257

Closed: nelson-liu closed this 7 years ago

nelson-liu commented 7 years ago

While waiting for aristo-eng office hours, I decided to switch K.repeat_elements to K.tile in the OptionAttentionSum layer.
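
For reference, a minimal sketch of the two ways to broadcast an attention distribution across answer options. The shapes and function names here are assumptions for illustration; the actual OptionAttentionSum layer may arrange its axes differently.

```python
from keras import backend as K

# Illustrative shapes only:
#   attention:   (batch_size, passage_length) attention over the passage
#   num_options: number of answer options the attention is broadcast over

def expand_with_repeat_elements(attention, num_options):
    # (batch_size, passage_length) -> (batch_size, num_options, passage_length)
    expanded = K.expand_dims(attention, 1)
    return K.repeat_elements(expanded, num_options, axis=1)

def expand_with_tile(attention, num_options):
    # Same result via K.tile: tile the new axis num_options times and leave
    # the batch and passage axes untouched.
    expanded = K.expand_dims(attention, 1)
    return K.tile(expanded, [1, num_options, 1])
```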

nelson-liu commented 7 years ago

@matt-gardner, I turned this into a general PR for sprucing up the layers used in the AttentionSumReader. I'll do the same for the GAReader in a bit (maybe after doing some more search stuff).

matt-gardner commented 7 years ago

Looks like Theano failed on a flaky test - can you decorate that test?
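
One common way to decorate a non-deterministic test is the `flaky` package, which reruns the test a few times before reporting failure. Whether this repo uses `flaky` or its own decorator is an assumption; the test name below is a placeholder.

```python
from flaky import flaky

@flaky(max_runs=3, min_passes=1)
def test_attention_sum_reader_trains():
    # Placeholder body; the real test would build the model, fit it briefly,
    # and assert that the loss is finite.
    assert True
```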

matt-gardner commented 7 years ago

And did we ever figure out whether this is actually an optimization? If it's actually slower, we should probably not make the change.

matt-gardner commented 7 years ago

If you think it is indeed better, then feel free to merge.

nelson-liu commented 7 years ago

Eh, I'm pretty indifferent. I think I like the semantics of K.tile more, but it's still quite confusing why it would be slower.
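
One way to sanity-check the speed question would be a micro-benchmark of just the two ops. The shapes below are assumptions, and results will depend on the backend (Theano vs. TensorFlow) and hardware; this is a sketch, not the project's benchmarking setup.

```python
import timeit

import numpy as np
from keras import backend as K

# Assumed shapes: attention over a 500-token passage, 10 answer options.
attention = K.placeholder(shape=(None, 500))
num_options = 10

tiled = K.tile(K.expand_dims(attention, 1), [1, num_options, 1])
repeated = K.repeat_elements(K.expand_dims(attention, 1), num_options, axis=1)

tile_fn = K.function([attention], [tiled])
repeat_fn = K.function([attention], [repeated])

batch = np.random.rand(64, 500).astype('float32')
print('K.tile:           ', timeit.timeit(lambda: tile_fn([batch]), number=100))
print('K.repeat_elements:', timeit.timeit(lambda: repeat_fn([batch]), number=100))
```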