lucidrains / x-transformers

A concise but complete full-attention transformer with a set of promising experimental features from various papers
MIT License
4.73k stars · 405 forks

Hopfield Nets for memory purpose in x-transformers? #59

Closed Vbansal21 closed 3 years ago

Vbansal21 commented 3 years ago

Hi, this x-transformers repo has a lot of very useful features all in one place, though I was wondering whether modern Hopfield networks might give an increase in performance? An implementation is given here: https://github.com/ml-jku/hopfield-layers, though I couldn't understand how to use it for memory purposes. What are your views on it? Are modern Hopfield networks useful as associative memory nets, and if so, how should they be implemented? Just adding them like a lookup layer didn't give any special performance improvement.

lucidrains commented 3 years ago

@Vbansal21 Hi Vaibhav! A group of us have dissected that paper, and its additions on top of multihead attention (the recurrent refinement between queries and keys, as well as the learned scaling) really didn't add any meaningful improvements

I think it is only useful as a theoretical perspective. It has not added anything beneficial to actual practice and construction of transformers.
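The theoretical perspective is easy to see in code: one "modern Hopfield" retrieval step is literally single-head softmax attention in which the stored patterns act as both keys and values. A minimal numpy sketch (illustrative only, not from either repo; `beta` stands in for the paper's learned scaling):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
patterns = rng.standard_normal((8, 16))               # 8 stored memories, dim 16
query = patterns[3] + 0.1 * rng.standard_normal(16)   # noisy probe of memory 3

beta = 4.0  # inverse temperature (the "learned scaling")
# one Hopfield retrieval step: xi_new = X^T softmax(beta * X @ xi)
retrieved = patterns.T @ softmax(beta * patterns @ query)

# ...which is exactly attention with Q=query, K=V=patterns
attn_out = softmax(beta * (patterns @ query)) @ patterns
```

With a sharp enough `beta`, the softmax concentrates on the best-matching key and the step returns (approximately) the stored pattern, which is the "associative memory" behaviour; but structurally nothing here goes beyond the attention a transformer already computes.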

Vbansal21 commented 3 years ago

Thanks for the reply 😄. Now I understand why my training didn't show any special improvement 😂. One more question though: what about PKMs (product key memories)?

lucidrains commented 3 years ago

@Vbansal21 PKMs are kind of a niche subject that i've only seen 2-3 papers on - if you want to greatly expand parameters while keeping compute constant, i suggest looking at mixture of experts https://github.com/lucidrains/mixture-of-experts
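The "expand parameters, keep compute constant" trade-off can be sketched in a few lines (illustrative numpy with top-1 gating; the linked repo is a full PyTorch implementation with noisy top-k routing and load balancing):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_experts = 16, 4
# parameter count grows linearly with n_experts...
experts = rng.standard_normal((n_experts, dim, dim))
gate_w = rng.standard_normal((dim, n_experts))  # tiny routing layer

def moe_forward(x):
    # ...but each token is routed to a single expert,
    # so per-token compute is one feedforward, regardless of n_experts
    expert_id = int((x @ gate_w).argmax())
    return np.maximum(experts[expert_id] @ x, 0.0)  # relu(W_e @ x)

y = moe_forward(rng.standard_normal(dim))
```

Quadrupling `n_experts` here quadruples the stored weights while the forward pass still touches only one `dim × dim` matrix per token.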

Vbansal21 commented 3 years ago

Oh, so PKM is essentially a way to increase parameters without increasing compute much. Well then, that won't be useful to me, since I am trying to minimise both parameters and compute. 😅 Thanks for replying.
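For context, the product-key trick achieves that trade-off by factorizing a huge key table into two small sub-key sets: the value table below holds `n_sub**2` slots (lots of parameters), yet each query only scores `2 * n_sub` sub-keys. A minimal sketch (illustrative numpy; all names are made up for this example):

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_sub, topk = 16, 32, 4
half = dim // 2
sub_keys_a = rng.standard_normal((n_sub, half))
sub_keys_b = rng.standard_normal((n_sub, half))
values = rng.standard_normal((n_sub * n_sub, dim))  # 1024 memory slots

def pkm_lookup(q):
    sa = sub_keys_a @ q[:half]     # each half of the query scores
    sb = sub_keys_b @ q[half:]     # its own small sub-key set
    ia = np.argsort(sa)[-topk:]    # top sub-keys per half
    ib = np.argsort(sb)[-topk:]
    # cartesian product of the top halves -> candidate slots in the big table
    pairs = [(a, b) for a in ia for b in ib]
    scores = np.array([sa[a] + sb[b] for a, b in pairs])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    slots = [a * n_sub + b for a, b in pairs]
    return w @ values[slots]       # weighted read of a few slots only

out = pkm_lookup(rng.standard_normal(dim))
```

So the memory can be scaled to millions of slots while lookup cost stays proportional to the sub-key sets, which is exactly why it doesn't help when the goal is fewer parameters overall.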

lucidrains commented 3 years ago

many of us are realizing that minimizing parameters is not the way to go - i'd encourage you to read up on scaling laws

Vbansal21 commented 3 years ago

Thanks for the pro-tip. 😎 And thanks for answering my queries; I'll be closing the issue. As for the parameters part, I am actually focusing on embedded inference for general purposes on low-compute devices (Perceiver IO for small devices), like the Qualcomm Snapdragon 800 series or Intel Pentium/i3, etc.

lucidrains commented 3 years ago

@Vbansal21 oh ok, that makes sense then!