yaoxingcheng / TLM

ICML'2022: NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework
MIT License

NLP From Scratch Without Large-Scale Pretraining

This repository contains the code, pre-trained model checkpoints and collected datasets for our paper: NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework.

In our proposed framework, named TLM (task-driven language modeling), instead of training a language model over the entire general corpus and then finetuning it on task data, we first use task data as queries to retrieve a tiny subset of the general corpus, and then perform joint learning on both the task objective and self-supervised language modeling objective.
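The overall flow can be sketched in a few lines of Python. This is a high-level illustration only; the function names below are hypothetical placeholders, not this repository's API:

```python
# High-level sketch of the TLM framework (all names are hypothetical
# placeholders, not this repository's actual API).

def tlm(task_data, general_corpus, retrieve, train_joint, finetune):
    # 1. Use each task example as a query to retrieve a tiny
    #    subset of the general corpus.
    external = [doc for q in task_data for doc in retrieve(q, general_corpus)]
    # 2. Stage 1: joint training on the task objective plus a
    #    self-supervised MLM objective over task + external data.
    model = train_joint(task_data, external)
    # 3. Stage 2: finetune on the task data alone.
    return finetune(model, task_data)
```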

Requirements

We implement our models and training loops on top of open-source libraries from HuggingFace. The core dependencies of this repository are listed in requirements.txt and can be installed with:

pip install -r requirements.txt

All our experiments are conducted on a node with 8 A100 40GB SXM GPUs. Different hardware may produce results that differ slightly from those reported.

Models and Datasets

We release the trained models on 8 tasks at 3 different scales, together with the task datasets and selected external data. Our released model checkpoints, datasets and the performance of each model for each task are listed in the following table.

|        | AGNews | Hyp.  | Help. | IMDB  | ACL.  | SciERC | Chem. | RCT   |
|--------|--------|-------|-------|-------|-------|--------|-------|-------|
| Small  | 93.74  | 93.53 | 70.54 | 93.08 | 69.84 | 80.51  | 81.99 | 86.99 |
| Medium | 93.96  | 94.05 | 70.90 | 93.97 | 72.37 | 81.88  | 83.24 | 87.28 |
| Large  | 94.36  | 95.16 | 72.49 | 95.77 | 72.19 | 83.29  | 85.12 | 87.50 |

The released models and datasets are compatible with HuggingFace's Transformers and Datasets libraries. We provide an example script to evaluate a model checkpoint on a given task; run

bash example_scripts/evaluate.sh

to get the evaluation results for SciERC with a small-scale model.
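Under the hood, evaluation reduces to computing task accuracy over a held-out split. A minimal, self-contained sketch of such a loop (not the script's actual code; the model and batches here stand in for a loaded checkpoint and dataset):

```python
import torch

@torch.no_grad()
def evaluate(model, batches, device="cpu"):
    """Compute classification accuracy of `model` over `batches`.
    Each batch is an (inputs, labels) pair; `model(inputs)` is
    assumed to return class logits."""
    model.eval()
    correct, total = 0, 0
    for inputs, labels in batches:
        logits = model(inputs.to(device))
        correct += (logits.argmax(-1) == labels.to(device)).sum().item()
        total += labels.size(0)
    return correct / total
```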

Training

We provide two example scripts to train a model from scratch. To train a small-scale model for SciERC, run

bash example_scripts/train.sh && bash example_scripts/finetune.sh

Here example_scripts/train.sh corresponds to the first training stage, where the external data ratio and the MLM weight are non-zero, and example_scripts/finetune.sh corresponds to the second stage, where the model sees no external data and no self-supervised loss, i.e., standard finetuning on task data only.
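The difference between the two stages can be summarized as a single per-stage objective. A sketch, with hypothetical argument names (the actual hyperparameters live in the example scripts):

```python
def tlm_training_loss(task_loss, mlm_loss, mlm_weight, stage):
    """Per-stage training objective (a sketch; argument names are
    hypothetical). Stage 1 mixes the supervised task loss with a
    weighted self-supervised MLM loss; stage 2 drops the MLM term
    entirely, reducing to plain finetuning."""
    if stage == 1:
        return task_loss + mlm_weight * mlm_loss
    return task_loss
```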

Data Selection

We provide a Python script, src/data_selection.py, that performs data selection from a customized source corpus using queries drawn from a customized target (task) dataset.

To select data with the provided scripts, first download, install, and start ElasticSearch with its default settings, then run

bash example_scripts/data_selection.sh

The above script retrieves sequences from an example source corpus that are similar to examples in an example task dataset. Feel free to build inverted indices for your own corpus and select data for your own tasks.
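ElasticSearch scores candidates with BM25 by default, which is the retrieval signal the selection relies on. As a self-contained illustration of that scoring idea (a toy re-implementation, not this repository's code, which delegates retrieval to an ElasticSearch index):

```python
import math
from collections import Counter

def bm25_rank(query, corpus, k1=1.2, b=0.75):
    """Rank `corpus` documents against `query` with BM25, the default
    similarity in ElasticSearch. `query` is a list of tokens and
    `corpus` a list of token lists; returns document indices sorted
    from most to least relevant."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    # Document frequency of each distinct query term.
    df = {t: sum(t in d for d in corpus) for t in set(query)}

    def score(doc):
        tf = Counter(doc)
        s = 0.0
        for t in query:
            if df[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        return s

    return sorted(range(N), key=lambda i: score(corpus[i]), reverse=True)
```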

Citation

Please cite our paper if you use TLM in your work:

@InProceedings{pmlr-v162-yao22c,
  title =    {{NLP} From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework},
  author =       {Yao, Xingcheng and Zheng, Yanan and Yang, Xiaocong and Yang, Zhilin},
  booktitle =    {Proceedings of the 39th International Conference on Machine Learning},
  pages =    {25438--25451},
  year =     {2022},
  editor =   {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume =   {162},
  series =   {Proceedings of Machine Learning Research},
  month =    {17--23 Jul},
  publisher =    {PMLR},
  pdf =      {https://proceedings.mlr.press/v162/yao22c/yao22c.pdf},
  url =      {https://proceedings.mlr.press/v162/yao22c.html},
}