dair-ai / ml-nlp-paper-discussions

📄 A repo containing notes and discussions for our weekly NLP/ML paper discussions.

Let's select a paper for June 27, 2020 #5

Closed omarsar closed 4 years ago

omarsar commented 4 years ago

Suggest a paper you would like us to discuss during our weekly paper reading discussion. It can be a paper from RL, Computer Vision, NLP, or any other ML-related area.

You can vote on a suggested paper by using the 👍 emoji. I will close the issue in one or two days and select the paper with the most votes. Then I will make the announcement at the beginning of the week. Thanks.

msank00 commented 4 years ago

Why: Current self-supervised techniques for image data are complex, requiring significant modifications to the architecture or the training procedure, and have not seen widespread adoption. The authors outline a method that not only simplifies but also improves previous approaches to self-supervised representation learning on images.

kaushal0494 commented 4 years ago

A seminal work on pre-training cross-lingual models with different objectives [NeurIPS 2019]: Cross-lingual Language Model (XLM) Pretraining

respondgaurav commented 4 years ago

Implicit Neural Representations with Periodic Activation Functions

Website

SIREN outperforms all baselines by a significant margin, converges significantly faster, and is the only architecture that accurately represents the gradients of the signal, enabling its use to solve boundary value problems.
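The core idea behind SIREN is replacing standard activations with a sine, sin(w0 * (Wx + b)), together with a matched initialization scheme. A minimal sketch of one such layer (NumPy, names and dimensions illustrative; the `w0 = 30` default and the uniform initialization bounds follow the paper's description):

```python
import numpy as np

def siren_layer(x, rng, in_dim, out_dim, w0=30.0, first=False):
    """One SIREN-style layer: sin(w0 * (x @ W + b)).

    First layer uses weights ~ U(-1/n, 1/n); subsequent layers use
    U(-sqrt(6/n)/w0, sqrt(6/n)/w0), as described in the paper.
    """
    bound = 1.0 / in_dim if first else np.sqrt(6.0 / in_dim) / w0
    W = rng.uniform(-bound, bound, size=(in_dim, out_dim))
    b = rng.uniform(-bound, bound, size=out_dim)
    return np.sin(w0 * (x @ W + b))

rng = np.random.default_rng(0)
coords = np.linspace(-1, 1, 5).reshape(-1, 1)  # 1-D coordinates in [-1, 1]
h = siren_layer(coords, rng, in_dim=1, out_dim=16, first=True)
print(h.shape)  # (5, 16)
```

Because the activation is a sine, the layer's derivatives are themselves SIREN-like, which is what lets the network represent signal gradients accurately.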

manisnesan commented 4 years ago

Single Headed Attention RNN: Stop Thinking With Your Head, by Stephen Merity: https://arxiv.org/abs/1911.11423

Why: From @Smerity's tweet introducing the SHA-RNN:

- Read alternative history as a research genre
- Learn of the terrifying tokenization attack that leaves language models perplexed
- Get near-SotA results on enwik8 in hours on a lone GPU
- No Sesame Street or Transformers allowed

chunduri11 commented 4 years ago

https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf Generative Pretraining from Pixels

https://www.youtube.com/watch?v=YBlNQK0Ao6g Image-GPT

They train a sequence Transformer to auto-regressively predict pixels, without incorporating knowledge of the 2D input structure.
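That framing amounts to raster-scanning the image into a 1-D pixel sequence and training on next-pixel prediction, exactly as in language modeling. A small sketch of the data framing only (the sequence Transformer itself is omitted; the toy 3x3 "image" is illustrative):

```python
import numpy as np

# Flatten a 2-D image into a 1-D pixel sequence in raster order; the model
# never sees the 2-D structure, only this sequence.
img = np.arange(9).reshape(3, 3)  # toy 3x3 "image"
seq = img.reshape(-1)             # raster-scan flattening: shape (9,)

# Autoregressive framing: predict pixel t from all pixels before it.
contexts = [seq[:t] for t in range(1, len(seq))]
targets = seq[1:]

print(seq.tolist())                      # [0, 1, 2, 3, 4, 5, 6, 7, 8]
print(len(contexts), targets.tolist())   # 8 (context, target) pairs
```

In the actual paper the pixels are color-quantized into a small vocabulary first, so each position is a discrete token just like a word in a language model.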

omarsar commented 4 years ago

Thanks for voting. The event has been announced. Closing this issue now!