jwkanggist / SSL-narratives-NLP-1

Reading self-supervised learning in NLP backwards (거꾸로 읽는 self-supervised learning in NLP)

[4주차] Condenser: a Pre-training Architecture for Dense Retrieval #5

Open · hekim3434 opened 2 years ago


Keywords

bi-encoder, BERT, Attention behavior, Condenser

TL;DR

A language model pre-trained with the Condenser architecture improves over standard LMs by large margins on various text retrieval and similarity tasks, even with a simplified fine-tuning process.

Abstract

Pre-trained Transformer language models (LM) have become go-to text representation encoders. Prior research fine-tunes deep LMs to encode text sequences such as sentences and passages into single dense vector representations for efficient text comparison and retrieval. However, dense encoders require a lot of data and sophisticated techniques to effectively train and suffer in low data situations. This paper finds a key reason is that standard LMs’ internal attention structure is not ready-to-use for dense encoders, which needs to aggregate text information into the dense representation. We propose to pre-train towards dense encoder with a novel Transformer architecture, Condenser, where LM prediction CONditions on DENSE Representation. Our experiments show Condenser improves over standard LM by large margins on various text retrieval and similarity tasks.
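The core idea, "LM prediction conditions on dense representation," can be sketched roughly as follows. This is a minimal illustration only, not the paper's implementation: real Condenser reuses full BERT Transformer blocks as early/late backbone layers and a two-layer Condenser head, whereas here plain numpy linear maps stand in for every layer, and all names (`layer`, `w_early`, `head_in`, etc.) are invented for the sketch.

```python
# Sketch of the Condenser pre-training information flow (assumption-laden:
# numpy linear maps stand in for Transformer blocks; shapes are toy-sized).
import numpy as np

rng = np.random.default_rng(0)
d = 8          # hidden size
seq_len = 5    # position 0 is [CLS], the rest are tokens

def layer(x, w):
    # Stand-in for a Transformer block: linear map + nonlinearity.
    return np.tanh(x @ w)

x = rng.normal(size=(seq_len, d))                 # embedded input sequence
w_early, w_late, w_head = (rng.normal(size=(d, d)) for _ in range(3))

early = layer(x, w_early)   # "early" backbone layers
late = layer(early, w_late) # "late" backbone layers

# Condenser head input: the LATE [CLS] vector concatenated with the
# EARLY token states. Because the head never sees the late token states,
# the MLM loss on its output can only be lowered if the late [CLS]
# aggregates sequence-level information -- yielding a dense-ready [CLS].
head_in = np.vstack([late[:1], early[1:]])
head_out = layer(head_in, w_head)                 # would feed the MLM loss

print(head_out.shape)
```

The design point this illustrates: by routing the prediction task through the late [CLS] plus only early token states, pre-training itself forces attention to condense text information into the single dense vector that retrieval fine-tuning later uses.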

Paper link

https://aclanthology.org/2021.emnlp-main.75.pdf

Presentation link

https://drive.google.com/file/d/1VflGwn-jhiGEXCe2_BdmzkC0s_qAvV6f/view?usp=sharing

Video link

https://www.youtube.com/watch?v=TLG6sJcsJxA&feature=youtu.be