Describe the bug 🐛
The "Training data-efficient image transformers & distillation through attention" link under "Stuff implemented so far:" section of the readme leads to the ViT paper and not the DeiT paper.
To Reproduce
Steps to reproduce the behavior:
1. Go to the repository home page
2. Click on "Training data-efficient image transformers & distillation through attention" under "Stuff implemented so far:"
Expected behavior
The link should point to the DeiT paper: https://arxiv.org/pdf/2012.12877.pdf