@Vbansal21 Hi Vaibhav! A group of us have dissected that paper; beyond the recurrent refinement between queries and keys and the learned scaling, it really doesn't add any meaningful improvements on top of multihead attention.
I think it is only useful as a theoretical perspective. It hasn't added anything beneficial to the actual practice of building transformers.
Thanks for the reply. Now I understand why my training didn't show any special improvement. One more question though: what about PKMs?
@Vbansal21 PKMs are kind of a niche subject that I've only seen 2-3 papers on - if you want to greatly expand parameters while keeping compute constant, I suggest looking at mixture-of-experts https://github.com/lucidrains/mixture-of-experts
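For anyone landing here later: below is a rough, hand-rolled sketch of the idea behind mixture-of-experts (top-1 token routing), not the linked library's actual API - the class name `TinyMoE` and all hyperparameters are illustrative, so check the repo's README for its real interface. The point is that parameters grow with `num_experts` while each token only pays for one expert's compute.

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Illustrative top-1 gated mixture-of-experts feedforward (not the library API)."""
    def __init__(self, dim, num_experts=4, hidden_mult=4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(dim, dim * hidden_mult),
                nn.GELU(),
                nn.Linear(dim * hidden_mult, dim),
            )
            for _ in range(num_experts)
        ])

    def forward(self, x):                      # x: (batch, seq, dim)
        weights = self.gate(x).softmax(dim=-1) # (batch, seq, num_experts)
        top_w, top_idx = weights.max(dim=-1)   # top-1 routing per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i                # tokens routed to expert i
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(2, 16, 64)
print(TinyMoE(64)(x).shape)    # torch.Size([2, 16, 64])
```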
Oh, so PKM is essentially a way to increase parameters without increasing compute much. Well then, that won't be useful to me, because I am trying to minimise both parameters and compute. Thanks for replying.
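For reference, the trick that makes that possible in PKM (Lample et al., "Large Memory Layers with Product Keys") is the product-key lookup: two small sets of `n` sub-keys define `n²` memory slots, so the value table can be huge while each query only touches top-k slots. A rough sketch of the mechanism, with illustrative names and sizes (not the paper's exact implementation):

```python
import torch
import torch.nn as nn

class TinyPKM(nn.Module):
    """Illustrative product-key memory lookup."""
    def __init__(self, dim, n_sub_keys=128, topk=8):
        super().__init__()
        half = dim // 2
        self.topk = topk
        self.n = n_sub_keys
        self.sub_keys1 = nn.Parameter(torch.randn(n_sub_keys, half))
        self.sub_keys2 = nn.Parameter(torch.randn(n_sub_keys, half))
        # value table has n_sub_keys ** 2 rows: huge parameter count, tiny per-query compute
        self.values = nn.Embedding(n_sub_keys ** 2, dim)

    def forward(self, q):                                              # q: (batch, dim)
        q1, q2 = q.chunk(2, dim=-1)
        s1, i1 = (q1 @ self.sub_keys1.t()).topk(self.topk, dim=-1)     # (batch, topk)
        s2, i2 = (q2 @ self.sub_keys2.t()).topk(self.topk, dim=-1)
        # scores / flat indices of all topk x topk candidate slots
        scores = (s1.unsqueeze(-1) + s2.unsqueeze(-2)).flatten(1)      # (batch, topk*topk)
        idx = (i1.unsqueeze(-1) * self.n + i2.unsqueeze(-2)).flatten(1)
        best, pos = scores.topk(self.topk, dim=-1)                     # final top-k slots
        w = best.softmax(dim=-1)                                       # (batch, topk)
        v = self.values(idx.gather(1, pos))                            # (batch, topk, dim)
        return (w.unsqueeze(-1) * v).sum(dim=1)                        # (batch, dim)

q = torch.randn(4, 64)
print(TinyPKM(64)(q).shape)    # torch.Size([4, 64])
```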
Many of us are realizing that minimizing parameters is not the way to go - I encourage you to read up on scaling laws.
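For concreteness, here is the parameter-count power law reported in Kaplan et al., "Scaling Laws for Neural Language Models" (2020); the constants below are the paper's approximate fits, so treat them as illustrative rather than exact:

```python
# Test loss falls as a power law in non-embedding parameters N: L(N) ~ (N_c / N) ** alpha_N
N_c, alpha_N = 8.8e13, 0.076   # approximate fits from Kaplan et al. (2020)

def predicted_loss(n_params):
    return (N_c / n_params) ** alpha_N

for n in (1e7, 1e8, 1e9):
    print(f"{n:.0e} params -> predicted loss ~{predicted_loss(n):.2f}")
```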
Thanks for the pro-tip, and thanks for answering my queries - I will be closing the issue. As for the parameters part, I am actually focusing on general-purpose embedded inference on low-compute devices (Perceiver IO for small devices), like the Qualcomm Snapdragon 800 series, Intel Pentium/i3, etc.
@Vbansal21 oh ok, that makes sense then!
Hi, this x-transformers repo has a lot of very useful features all in one place. I was wondering whether modern Hopfield networks might give a performance improvement? The implementation is given here: https://github.com/ml-jku/hopfield-layers. However, I couldn't figure out how to use it for memory purposes. What are your views on it? Are modern Hopfield networks useful as associative memory nets, and if so, how should they be implemented? Just adding one as a lookup layer didn't give any special performance improvement.
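For context on the implementation question: a minimal sketch of the retrieval update from "Hopfield Networks is All You Need" is below (hand-rolled, not the hopfield-layers library API; `beta` and sizes are illustrative). With a single iteration it is just scaled dot-product attention over the stored patterns, which is why the comments above say it adds little beyond multihead attention plus the recurrent refinement and learned scaling.

```python
import torch
import torch.nn.functional as F

def hopfield_retrieve(stored, query, beta=8.0, n_iters=1):
    """
    Modern (continuous) Hopfield update: the new state is the softmax(beta * similarity)-
    weighted sum of the stored patterns.
    stored: (num_patterns, dim) memory, query: (batch, dim) probe/state pattern.
    """
    for _ in range(n_iters):                                    # "recurrent refinement"
        attn = F.softmax(beta * query @ stored.t(), dim=-1)     # (batch, num_patterns)
        query = attn @ stored                                   # retrieved pattern(s)
    return query

# Store a few random patterns and retrieve from a noisy version of one of them.
patterns = F.normalize(torch.randn(16, 64), dim=-1)
noisy = patterns[3] + 0.3 * torch.randn(64)
retrieved = hopfield_retrieve(patterns, noisy.unsqueeze(0), beta=16.0)
print(F.cosine_similarity(retrieved, patterns[3].unsqueeze(0)))  # close to 1.0
```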