Why: Current self-supervised techniques for image data are complex, requiring significant modifications to the architecture or the training procedure, and have not seen widespread adoption. The authors outline a method that not only simplifies but also improves previous approaches to self-supervised representation learning on images.
A seminal work on pre-training cross-lingual language models with different objectives [NeurIPS 2019]: Cross-lingual Language Model (XLM) Pretraining
Implicit Neural Representations with Periodic Activation Functions
SIREN outperforms all baselines by a significant margin, converges substantially faster, and is the only architecture that accurately represents the gradients of the signal, enabling its use to solve boundary value problems.
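For context, a minimal sketch of the core idea in PyTorch: an MLP whose hidden layers apply a sine activation, fit to map coordinates to signal values. The layer widths, the `w0` frequency factor, and the coordinate sampling below are illustrative placeholders, and the paper's specific weight-initialization scheme is omitted.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation: y = sin(w0 * (W x + b))."""
    def __init__(self, in_features, out_features, w0=30.0):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

# A tiny implicit image representation: map 2D pixel coordinates to RGB values.
siren = nn.Sequential(
    SineLayer(2, 256),
    SineLayer(256, 256),
    nn.Linear(256, 3),
)

coords = torch.rand(1024, 2) * 2 - 1   # sample coordinates in [-1, 1]^2
rgb = siren(coords)                     # predicted colors; fit with MSE against ground-truth pixels
```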
Single Headed Attention RNN: Stop Thinking With Your Head. Author: Stephen Merity. https://arxiv.org/abs/1911.11423
Why: From @Smerity's tweet introducing the SHA-RNN: "Read alternative history as a research genre, learn of the terrifying tokenization attack that leaves language models perplexed, and get near-SotA results on enwik8 in hours on a lone GPU. No Sesame Street or Transformers allowed."
Generative Pretraining from Pixels: https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf
Image-GPT video: https://www.youtube.com/watch?v=YBlNQK0Ao6g
They train a sequence Transformer to auto-regressively predict pixels, without incorporating knowledge of the 2D input structure.
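A rough sketch of that setup (not OpenAI's actual model or hyperparameters): each image is flattened into a 1D sequence of discrete pixel tokens, and a causally masked Transformer is trained to predict the next token. The vocabulary size, model width, and sequence length below are placeholders.

```python
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 256, 128, 64   # e.g. 8x8 grayscale images, 256 intensity levels

embed = nn.Embedding(vocab_size, d_model)
pos = nn.Parameter(torch.zeros(seq_len, d_model))              # learned positional embedding
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(d_model, vocab_size)

pixels = torch.randint(0, vocab_size, (8, seq_len))            # a batch of flattened images
causal_mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)

x = embed(pixels) + pos                                        # no 2D structure is given to the model
h = encoder(x, mask=causal_mask)                               # causal mask -> autoregressive attention
logits = head(h)
loss = nn.functional.cross_entropy(                            # predict pixel t+1 from pixels <= t
    logits[:, :-1].reshape(-1, vocab_size), pixels[:, 1:].reshape(-1)
)
```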
Thanks for voting. The event has been announced. Closing this issue now!
Suggest a paper you would like us to discuss during our weekly paper reading discussion. It can be from RL, Computer Vision, NLP, or any other ML-related area.
You can vote on a suggested paper by using the 👍 emoji. I will close the issue in one or two days and select the paper with the most votes. Then I will make the announcement at the beginning of the week. Thanks.