
[18.11.30] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding #6

jason9693 opened this issue 5 years ago

jason9693 commented 5 years ago

BERT: Bidirectional Encoder Representations from Transformers.



Dataset

SSUHan commented 5 years ago

Questions

jason9693 commented 5 years ago

@SSUHan To be precise, it is not whole words that are embedded; what gets embedded are the 'word pieces' produced by tokenizing with the 'WordPiece model'. As for a baseline for sentence embedding, I have not found one yet.
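
A minimal sketch of the word-piece point above, assuming the Hugging Face `transformers` BertTokenizer as the WordPiece implementation (the thread itself names no specific library; the model name and example sentence are illustrative):

```python
# Sketch: BERT embeds WordPiece subword tokens, not whole words.
# Assumes the Hugging Face `transformers` package is installed.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# A word missing from BERT's ~30k-piece vocabulary is split into
# known subword pieces ("##" marks a word-internal continuation).
pieces = tokenizer.tokenize("BERT uses wordpiece embeddings")
print(pieces)
# e.g. ['bert', 'uses', 'word', '##piece', 'em', '##bed', '##ding', '##s']

# Each piece maps to its own row of the token embedding matrix, so
# out-of-vocabulary words still get representations via their pieces.
ids = tokenizer.convert_tokens_to_ids(pieces)
print(ids)
```

Because the embedding table is indexed by piece rather than by word, there is no single "word embedding" to extract; a word's representation is the sequence of its piece vectors as processed by the encoder.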