1) Changed: every Tuesday at 10:30 PM (flexible), we review one paper, with supporting materials (e.g., a PPT)
2) Discord channel: https://discord.gg/vVjq8XwqPz
3) Issues: attach materials as comments (via links), plus a short summary and link for any paper you read during the week that was interesting or worth introducing
4) Coding assignments: 1) implement backpropagation with NumPy (see the sketch after this list) 2) implement a Transformer 3) Baekjoon coding problems 4) etc.
5) Presentation time: 15 minutes + 5 minutes of Q&A
6) If you want to present in English, that is possible too!
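For assignment 1, here is a minimal sketch of what a NumPy backpropagation exercise could look like: a two-layer MLP fit to a toy regression target, with the gradients written out by hand via the chain rule. The layer sizes, learning rate, and toy data are illustrative assumptions, not part of the actual assignment spec.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed for illustration): learn y = sin(x) on [-pi, pi]
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

# Parameters of a 1-16-1 MLP with tanh hidden activation
W1 = rng.normal(0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    # Forward pass
    h_pre = X @ W1 + b1            # (256, 16) pre-activations
    h = np.tanh(h_pre)             # hidden activations
    y_hat = h @ W2 + b2            # (256, 1) predictions
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: chain rule applied layer by layer
    d_yhat = 2 * (y_hat - y) / len(X)   # dL/dy_hat for mean squared error
    dW2 = h.T @ d_yhat
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T
    d_hpre = d_h * (1 - h ** 2)         # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_hpre
    db1 = d_hpre.sum(axis=0)

    # Plain SGD update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```

A useful sanity check for the assignment is to compare these hand-derived gradients against finite differences on a few parameters before trusting the training loop.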
Presented papers (presenter: title):
- sanha: Flow Network based Generative Models for Non-iterative Diverse Candidate Generation
- keonwoo: Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation
- ingeol: Multi-Task Deep Neural Networks for Natural Language Understanding, GPT-2
- sanha: Decision Transformer: Reinforcement Learning via Sequence Modeling
- hoyeon: AutoAugment: Learning Augmentation Strategies from Data
- ingeol: Transformer-XL
- juwon: DCGAN
- sanha: Masked Autoencoder (MAE)
- keonwoo: A comprehensive survey on GNNs
- hoyeon: Neural Machine Translation by Jointly Learning to Align and Translate
- ingeol: End-to-end Neural Coreference Resolution
- juwon: U-Net
- hoyeon: Batch Normalization
- ingeol: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- sanha: Improving Language Understanding by Generative Pre-Training (GPT-1)
- juwon: Fully Convolutional Networks
- sanha: Visual Prompting via Image Inpainting
- keonwoo: Variational Inference: A Review for Statisticians
- hoyeon: Sequence to Sequence Learning with Neural Networks
- ingeol: Few-shot learning
- sanha: Generative Adversarial Imitation Learning (GAIL)
- keonwoo: Diffusion models
- hoyeon: Generative Adversarial Networks
- sanha: A study of inverse reinforcement learning and its implementation
- keonwoo: We need to know the confidence of predictions on a dataset
- hoyeon: Auto-Encoding Variational Bayes
- sanha: Graphs, Constraints, and Search for the Abstraction and Reasoning Corpus
- keonwoo: Attention Is All You Need
- sanha: Stochastic Prediction of Multi-Agent Interactions from Partial Observations
- keonwoo: A Tutorial on Bayesian Optimization
- hoyeon: Playing Atari with Deep Reinforcement Learning
- sanha: Object-Centric Learning with Slot Attention