Title: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks Link: https://arxiv.org/abs/2005.11401
Title: A brief introduction to weakly supervised learning Paper: https://api.semanticscholar.org/CorpusID:44192968 Taxonomy: Modeling -> Training -> Weakly Supervised Learning
Why: A modeling topic, literature-review style, and practical in nature. The taxonomy is taken from https://nlusense.com/v/32:18
Note: re-proposing the paper I suggested earlier (6 votes).
Title: Language Models are Few-Shot Learners (GPT-3) Paper: https://arxiv.org/abs/2005.14165
Why: Relevant in the context of the WS2 discussion from our last reading group. What have we learned from scale? Really impressive zero-shot performance on a number of NLP tasks.
Title: Attention Is All You Need Paper: https://arxiv.org/pdf/1706.03762.pdf
Title: A critical analysis of self-supervision, or what we can learn from a single image Paper: https://openreview.net/pdf?id=B1esx6EYvr
Title: Universal Adversarial Perturbations Paper: https://arxiv.org/pdf/1610.08401.pdf
Thank you all for suggesting papers and voting. It seems GPT-3 is the winner here. Feel free to suggest your papers again next weekend, or bring new papers that you feel are exciting for the group. :)
I guess we can start the voting process, and Elvis Saravia can choose the most-voted paper in a few (2-3) days and make it official with an announcement.
Comment with a paper you would like us to cover in our weekly paper reading discussion.
You can vote on a suggested paper by using the 👍 emoji. Thanks.