eg-nlp-community / nlp-reading-group


[22/05/2020] Friday 9pm GMT+2 Longformer: The Long-Document Transformer #13

Closed hadyelsahar closed 4 years ago

hadyelsahar commented 4 years ago

Next Friday @ibeltagy will be with us presenting his work on:

Longformer: The Long-Document Transformer

Iz Beltagy, Matthew E. Peters, Arman Cohan

paper: https://arxiv.org/abs/2004.05150
code: https://github.com/allenai/longformer

Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA.
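To make that attention pattern concrete, here is a minimal sketch (not the paper's code; the function name and the numbers are purely illustrative). Each token attends to a local window around itself, and a few task-chosen tokens attend globally. Note the actual implementation never materializes the full n x n matrix; it only computes the banded part, which is what makes memory linear in sequence length. The dense mask below just visualizes the pattern.

import torch

def longformer_style_mask(seq_len, window, global_idx):
    """Boolean mask of allowed attention pairs: sliding window + global tokens (illustration only)."""
    i = torch.arange(seq_len)
    # local sliding window: query and key positions at most window // 2 apart
    mask = (i[:, None] - i[None, :]).abs() <= window // 2
    # global tokens attend everywhere and are attended to by every position
    mask[global_idx, :] = True
    mask[:, global_idx] = True
    return mask

mask = longformer_style_mask(seq_len=16, window=4, global_idx=[0])  # e.g. a [CLS]-like token
print(mask.int())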

hadyelsahar commented 4 years ago

The main merits of this work are twofold: (1) an attention mechanism that combines local sliding-window attention with task-motivated global attention and scales linearly with sequence length, and (2) a drop-in replacement for standard self-attention that can be pretrained and finetuned on downstream tasks.

Both allow LMs and masked LMs to be trained with much larger contexts, with a memory footprint that is linear in sequence length and without a huge compromise in performance.
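As a rough back-of-the-envelope illustration (the numbers are mine, with a window of 512 tokens, in the ballpark of what the paper uses):

# Illustrative arithmetic only: entries in one head's attention-score matrix.
seq_len, window = 4096, 512
full_attention = seq_len * seq_len        # ~16.8M entries, quadratic in seq_len
sliding_window = seq_len * window         # ~2.1M entries, linear in seq_len
print(full_attention / sliding_window)    # 8x fewer entries at 4k tokens; the gap widens with length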

Longformers can help reshape many problems that were previously tackled with lossy hacks, such as truncation in abstractive summarization or GNNs for long-document question answering.

It would be amazing to see this work extended to multi-doc scenarios, where one could design a custom attention pattern with sliding windows within a single doc and global attention across all docs.
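As a rough sketch of what that could look like (purely an illustration, assuming the HuggingFace transformers port of Longformer and its LongformerModel / global_attention_mask interface): documents are concatenated into one long sequence, and the separator tokens get global attention while everything else keeps the sliding-window attention.

import torch
from transformers import LongformerModel, LongformerTokenizer  # assumes the HF Longformer port

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

docs = ["first document ...", "second document ...", "third document ..."]
text = f" {tokenizer.sep_token} ".join(docs)  # concatenate docs with separator tokens
enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)

# 1 = global attention, 0 = local sliding-window attention
global_attention_mask = torch.zeros_like(enc["input_ids"])
global_attention_mask[enc["input_ids"] == tokenizer.sep_token_id] = 1  # separators see every doc
global_attention_mask[:, 0] = 1                                        # first token global as well

out = model(**enc, global_attention_mask=global_attention_mask)
print(out.last_hidden_state.shape)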

Some interesting pointers from the discussion:

ibeltagy commented 4 years ago

Thanks, Hady, for the summary. Quick notes:

# Inside the encoder's loop over layers, wrap each layer call in gradient checkpointing:
layer_outputs = torch.utils.checkpoint.checkpoint(
    layer_module, hidden_states, attention_mask, head_mask[i],
    encoder_hidden_states, encoder_attention_mask,
)  # might break because of weird `args` and `kwargs` issues, but this is the general idea
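For readers unfamiliar with the trick, here is a tiny self-contained illustration of torch.utils.checkpoint with toy layers (not the actual Longformer code): the intermediate activations inside each checkpointed block are discarded after the forward pass and recomputed during backward, trading extra compute for a much smaller memory footprint on long sequences.

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Toy stand-in for a stack of transformer layers (illustrative only).
layers = nn.ModuleList([nn.Sequential(nn.Linear(64, 64), nn.ReLU()) for _ in range(12)])

hidden = torch.randn(8, 64, requires_grad=True)
for layer in layers:
    # Do not store this layer's intermediate activations; recompute them during backward.
    hidden = checkpoint(layer, hidden)
hidden.sum().backward()  # gradients are unchanged, only peak memory is reduced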