The main merits of this work are two-fold:
Both could allow LMs and masked LMs to be trained on much larger contexts, with a memory footprint that is linear in sequence length and without a huge compromise in performance (a toy sliding-window attention sketch follows below).
Longformers can help reshape many problems that were previously tackled with lossy hacks, such as truncation in the case of abstractive summarization, or GNNs for long-document question answering.
It would be amazing to see such work extended to multi-doc scenarios, where one could design a custom attention pattern with sliding windows within a single doc and global attention over all docs.
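To make the linear-memory claim concrete, here is a minimal sketch of sliding-window self-attention in plain PyTorch. It only illustrates the attention pattern (the function name and shapes are made up here; it is not the paper's TVM/CUDA kernel or the repo's `sliding_chunks` implementation): the point is that the score tensor has shape (n, 2w+1) rather than (n, n).

```python
import torch
import torch.nn.functional as F

def sliding_window_attention(q, k, v, w):
    """Toy sliding-window self-attention (single head, no batch dim).

    q, k, v: (n, d) tensors; w: one-sided window size.
    Each query attends only to keys within +/- w positions, so the score
    tensor is (n, 2w+1) instead of (n, n), i.e. linear in sequence length.
    """
    n, d = q.shape
    # Pad keys/values so every position has exactly 2w+1 neighbours.
    k_pad = F.pad(k, (0, 0, w, w))                             # (n + 2w, d)
    v_pad = F.pad(v, (0, 0, w, w))                             # (n + 2w, d)
    # Gather each position's window of keys/values: (n, 2w+1, d).
    # A real implementation (the paper's TVM kernel or the repo's
    # `sliding_chunks` trick) avoids materialising these copies.
    k_win = k_pad.unfold(0, 2 * w + 1, 1).transpose(1, 2)
    v_win = v_pad.unfold(0, 2 * w + 1, 1).transpose(1, 2)
    scores = torch.einsum('nd,nkd->nk', q, k_win) / d ** 0.5   # (n, 2w+1)
    # Mask window slots that fall outside the sequence (the padded zeros).
    key_idx = (torch.arange(n, device=q.device).unsqueeze(1)
               + torch.arange(2 * w + 1, device=q.device) - w)
    scores = scores.masked_fill((key_idx < 0) | (key_idx >= n), float('-inf'))
    return torch.einsum('nk,nkd->nd', scores.softmax(dim=-1), v_win)

# 4096 tokens with a +/-256 window: 4096 x 513 scores instead of 4096 x 4096.
q = k = v = torch.randn(4096, 64)
out = sliding_window_attention(q, k, v, w=256)
```

Global attention on a few selected tokens, or per-document windows for the multi-doc idea above, would be extra masking on top of this local pattern.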
Interesting pointers from the discussion:
Blocksparse for faster sparse matrix multiplication, implemented in TensorFlow: https://openai.com/blog/block-sparse-gpu-kernels/ ; discussion of a PyTorch implementation: https://github.com/openai/blocksparse/issues/2 (a toy reference of what a block-sparse matmul computes is sketched below).
Gradient checkpointing: https://github.com/cybertronai/gradient-checkpointing
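To make the "specify the sparsity pattern as input" idea concrete (see also the note below that blocksparse is about generality, not speed), here is a tiny pure-PyTorch reference of what a block-sparse matmul computes. The helper name is made up, and the real library ships fused GPU kernels rather than masking a dense weight.

```python
import torch

def block_sparse_matmul(x, w, pattern, block_size):
    """Dense reference for the semantics of a block-sparse matmul.

    x:        (batch, in_features) input
    w:        (in_features, out_features) weight
    pattern:  (in_features // block_size, out_features // block_size) 0/1 tensor;
              a 0 means the whole corresponding block of `w` is zero.
    A real kernel (e.g. openai/blocksparse) stores and multiplies only the
    non-zero blocks; here we simply mask the dense weight for clarity.
    """
    mask = pattern.repeat_interleave(block_size, dim=0)
    mask = mask.repeat_interleave(block_size, dim=1)   # block pattern -> element-level mask
    return x @ (w * mask)

# Example: a random block pattern that keeps roughly half of the 32x32 blocks.
block_size = 32
pattern = (torch.rand(512 // block_size, 512 // block_size) < 0.5).float()
x = torch.randn(8, 512)
w = torch.randn(512, 512)
y = block_sparse_matmul(x, w, pattern, block_size)     # (8, 512)
```

A Longformer-style sliding window corresponds to a banded block pattern over the query/key score matrix.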
Thanks, Hady, for the summary. Quick notes:
The TVM code and the custom CUDA kernel are currently only useful for the LM setting, where dilation is important. None of our pretrain/finetune experiments use them.
Blocksparse is not faster; it is just more general, in that you specify the sparsity pattern as an input.
Gradient checkpointing is already implemented in PyTorch. To add support for it in the huggingface code, all you need is to replace this line with the following:
```python
layer_outputs = torch.utils.checkpoint.checkpoint(
    layer_module, hidden_states, attention_mask, head_mask[i],
    encoder_hidden_states, encoder_attention_mask
)  # might break because of weird `args` and `kwargs` issues, but this is the general idea
```
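For completeness, here is a self-contained toy sketch of the same mechanism outside the huggingface code (the `ToyEncoder` module is made up): `torch.utils.checkpoint.checkpoint` drops each layer's intermediate activations during the forward pass and recomputes them during backward, trading extra compute for memory.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class ToyEncoder(nn.Module):
    """Stand-in for a transformer encoder: a stack of feed-forward blocks."""

    def __init__(self, dim=256, num_layers=12, use_checkpointing=True):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
             for _ in range(num_layers)]
        )
        self.use_checkpointing = use_checkpointing

    def forward(self, hidden_states):
        for layer in self.layers:
            if self.use_checkpointing and self.training:
                # Don't keep this layer's intermediate activations; recompute
                # them during the backward pass instead.
                hidden_states = checkpoint(layer, hidden_states)
            else:
                hidden_states = layer(hidden_states)
        return hidden_states

model = ToyEncoder()
x = torch.randn(2, 4096, 256, requires_grad=True)  # long sequences are where this pays off
model(x).sum().backward()                          # backward works as usual, just with recomputation
```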
Next Friday @ibeltagy will be with us presenting his work on:
Longformer: The Long-Document Transformer
Iz Beltagy, Matthew E. Peters, Arman Cohan
paper: https://arxiv.org/abs/2004.05150
code: https://github.com/allenai/longformer