
Enjoy-Hamburger 🍔

Official implementation of Hamburger, Is Attention Better Than Matrix Decomposition? (ICLR 2021, top 3%)

Squirtle (憨憨) invites you to enjoy Hamburger! 憨, which shares its pronunciation with ham, means simple and plain in Chinese.

Update

Introduction

This repo provides the official implementation of Hamburger for further research. We sincerely hope this paper brings you some inspiration about the attention mechanism, especially about how low-rankness and optimization-driven methods can help model the so-called global information in deep learning. We also highlight Hamburger as a semi-implicit model and the one-step gradient as an alternative for training both implicit and semi-implicit models.
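As a rough sketch of the one-step gradient (not code from this repo; step, h0, and n_iters are illustrative placeholders), the idea is to run the inner solver without tracking gradients and to back-propagate only through the final iteration:

import torch

def one_step_grad_solve(step, x, h0, n_iters=6):
    # Iterate h <- step(x, h), but back-propagate only through the last update.
    h = h0
    with torch.no_grad():              # inner iterations: no gradient tracking
        for _ in range(n_iters - 1):
            h = step(x, h)
    return step(x, h)                  # gradients flow through this step only

One practical consequence is that the memory and backward cost stay constant with respect to the number of inner iterations.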

We model the global context issue as a low-rank completion problem and show that its optimization algorithms can help design global information blocks. The paper then proposes a series of Hamburgers, in which we employ the optimization algorithms for solving matrix decompositions (MDs) to factorize the input representations into sub-matrices and reconstruct a low-rank embedding. Hamburgers with different MDs can perform favorably against the popular global context module, self-attention, when the gradients back-propagated through the MDs are handled carefully.
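For intuition only, here is a minimal PyTorch sketch of that recipe: a lower bread (1x1 conv), a ham that reconstructs a low-rank embedding with multiplicative-update NMF, and an upper bread, wrapped in a residual connection. The rank, the number of iterations, the random initialization of the bases, and the class names are illustrative assumptions rather than the exact design released in this repo.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NMFHam(nn.Module):
    # The "ham": reconstruct a rank-r embedding of the non-negative input
    # X ≈ D @ C with multiplicative-update NMF, using the one-step gradient.
    def __init__(self, rank=64, n_iters=6):
        super().__init__()
        self.rank, self.n_iters, self.eps = rank, n_iters, 1e-6

    def update(self, x, D, C):
        C = C * (D.transpose(1, 2) @ x) / (D.transpose(1, 2) @ D @ C + self.eps)
        D = D * (x @ C.transpose(1, 2)) / (D @ C @ C.transpose(1, 2) + self.eps)
        return D, C

    def forward(self, x):                                   # x: (B, d, n)
        b, d, n = x.shape
        D = torch.rand(b, d, self.rank, device=x.device)    # bases (dictionary)
        C = torch.rand(b, self.rank, n, device=x.device)    # coefficients
        with torch.no_grad():                                # one-step gradient
            for _ in range(self.n_iters - 1):
                D, C = self.update(x, D, C)
        D, C = self.update(x, D, C)                          # differentiable step
        return D @ C                                         # low-rank reconstruction

class Hamburger(nn.Module):
    # lower bread -> ham (matrix decomposition) -> upper bread, plus a skip.
    def __init__(self, channels=512, rank=64):
        super().__init__()
        self.lower_bread = nn.Conv2d(channels, channels, 1, bias=False)
        self.ham = NMFHam(rank)
        self.upper_bread = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, x):                                    # x: (B, C, H, W)
        b, c, h, w = x.shape
        z = F.relu(self.lower_bread(x))                      # non-negativity for NMF
        z = self.ham(z.view(b, c, h * w)).view(b, c, h, w)
        return x + self.upper_bread(z)

The no_grad loop inside NMFHam is the one-step gradient mentioned above; only the final update contributes to back-propagation.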


We are working on some exciting topics. Please wait for our new papers. :)

Enjoy Hamburger, please!

Organization

This section introduces the organization of this repo.

We strongly recommend that readers enjoy the arXiv version or the blogs to understand this paper more comprehensively.

TODO:

Citation

If you find our work interesting or helpful to your research, please consider citing Hamburger. :)

@inproceedings{
    ham,
    title={Is Attention Better Than Matrix Decomposition?},
    author={Zhengyang Geng and Meng-Hao Guo and Hongxu Chen and Xia Li and Ke Wei and Zhouchen Lin},
    booktitle={International Conference on Learning Representations},
    year={2021},
}

Contact

Feel free to contact me if you have additional questions or are interested in collaborating. Please drop me an email at zhengyanggeng@gmail.com, or find me on Twitter or WeChat. Thank you!

Acknowledgments

Our research is supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). It has been a nice and joyful experience with the TFRC program. Thank you!

We would like to sincerely thank MMSegmentation, EMANet, PyTorch-Encoding, YLG, and TF-GAN for their awesome open-sourced code.