modulabs/beyondBERT
A repository collecting the discussion notes of the beyondBERT study group, season 11.5.
MIT License · 60 stars · 6 forks
Issues
#22 TinyBERT: Distilling BERT for Natural Language Understanding (seopbo, closed 3 years ago, 0 comments)
#21 PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination (seopbo, closed 3 years ago, 0 comments)
#20 FastBERT: a Self-distilling BERT with Adaptive Inference Time (seopbo, closed 3 years ago, 1 comment)
#19 ReCoSa: Detecting the Relevant Contexts with Self-Attention for Multi-turn Dialogue Generation (seopbo, closed 3 years ago, 0 comments)
#18 A Simple Language Model for Task-Oriented Dialogue (seopbo, closed 3 years ago, 0 comments)
#17 ToD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogues (seopbo, closed 3 years ago, 0 comments)
#16 Recipes for building an open-domain chatbot (seopbo, closed 4 years ago, 0 comments)
#15 You Impress Me: Dialogue Generation via Mutual Persona Perception (seopbo, closed 4 years ago, 1 comment)
#14 Don't Stop Pretraining: Adapt Language Models to Domains and Tasks (seopbo, closed 4 years ago, 9 comments)
#13 Mask-Predict: Parallel Decoding of Conditional Masked Language Models (seopbo, closed 4 years ago, 7 comments)
#12 Longformer: The Long-Document Transformer (seopbo, closed 4 years ago, 2 comments)
#11 Reformer: The Efficient Transformer (seopbo, closed 4 years ago, 2 comments)
#10 Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers (seopbo, opened 4 years ago, 0 comments)
#9 SqueezeBERT: What can computer vision teach NLP about efficient neural networks? (seopbo, opened 4 years ago, 0 comments)
#8 Well-Read Students Learn Better: On the Importance of Pre-training Compact Models (seopbo, opened 4 years ago, 0 comments)
#7 Data Augmentation using Pre-trained Transformer Models (seopbo, closed 4 years ago, 2 comments)
#6 BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension (seopbo, closed 4 years ago, 5 comments)
#5 ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators (datajuny, closed 4 years ago, 9 comments)
#4 ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators (seopbo, closed 4 years ago, 0 comments)
#3 ALBERT: A Lite BERT for Self-supervised Learning of Language Representations (diligejy, closed 4 years ago, 9 comments)
#2 How multilingual is Multilingual BERT? (seopbo, closed 4 years ago, 19 comments)
#1 The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives (seopbo, closed 4 years ago, 14 comments)