News summarization using a sequence-to-sequence model in TensorFlow.
This repository demonstrates abstractive summarization of news articles using TensorFlow's sequence-to-sequence model. The model incorporates an attention mechanism and uses LSTM cells for both the encoder and the decoder.
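The attention mechanism lets the decoder weight all of the encoder's hidden states at each step instead of relying on a single fixed summary vector. A minimal NumPy sketch of one dot-product attention step (the names and scoring function here are illustrative; the actual model uses TensorFlow's built-in seq2seq attention):

```python
import numpy as np

def attention_step(encoder_states, decoder_state):
    """One dot-product attention step.

    encoder_states: (seq_len, hidden) -- one vector per source word
    decoder_state:  (hidden,)         -- current decoder hidden state
    Returns the context vector and the attention weights.
    """
    scores = encoder_states @ decoder_state   # similarity of each source state
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    context = weights @ encoder_states        # weighted sum of encoder states
    return context, weights

encoder_states = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
decoder_state = np.array([1.0, 0.0])
context, weights = attention_step(encoder_states, decoder_state)
print(weights.round(3))  # largest weights on states aligned with the query
```

The context vector is then concatenated with the decoder state to predict the next output word.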
This model was trained on one million Associated Press Worldstream news stories from the English Gigaword Second Edition. The examples below come from a model trained on an AWS EC2 g2.2xlarge instance for 10 epochs, which took around 20 hours.
For more detailed information, please see our project research paper: Headline Generation Using Recurrent Neural Network.
News: A roadside bomb killed five people Thursday near a shelter used as a police recruiting center in northeast Baghdad, police said.
Actual headline: Iraqi police: Bomb kills 5 near police recruiting center in northeast Baghdad
Predicted headline: URGENT Explosion kills five people in Baghdad
News: The euro hit a record high against the dollar Monday in Asia as concerns over the U.S. subprime mortgage crisis remain a heavy weight on the greenback.
Actual headline: Euro hits record high versus dollar in Asian trading
Predicted headline: Euro hits record high against dollar
For demonstration, we use the sample file from LDC (a very small portion of English Gigaword) as the dataset to train the model. To reproduce results like the examples above, a larger training set is necessary. You can download trained model parameters, which were trained on a larger portion of Gigaword, by following the instructions in the Download vocabs and trained model parameters section below. The full English Gigaword corpus can be obtained through university libraries.
$ git clone https://github.com/hengluchang/deep-news-summarization.git
Install TensorFlow 0.12, pandas, NumPy, NLTK, and requests:
$ pip install -r requirements.txt
Create two folders named "working_dir" and "output" under the deep-news-summarization folder.
$ cd deep-news-summarization
$ mkdir -p working_dir output
Download the vocabularies and trained model parameters into working_dir:
$ python download_vocabs_and_trained_params.py ./working_dir
Split the dataset into separate sets for training and evaluation:
$ python split_data.py
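split_data.py partitions the article/headline pairs before training. A hedged sketch of such a split (the function name, ratios, and fixed seed are illustrative, not necessarily what the script actually uses):

```python
import random

def split_pairs(pairs, train_frac=0.8, dev_frac=0.1, seed=42):
    """Shuffle (article, headline) pairs and split them into train/dev/test."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)          # fixed seed: reproducible split
    n_train = int(len(pairs) * train_frac)
    n_dev = int(len(pairs) * dev_frac)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_dev],
            pairs[n_train + n_dev:])

pairs = [("article %d" % i, "headline %d" % i) for i in range(100)]
train, dev, test = split_pairs(pairs)
print(len(train), len(dev), len(test))  # 80 10 10
```

Shuffling before splitting matters here because Gigaword stories are ordered by date, and a date-ordered split would make the test set systematically newer than the training set.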
Run execute.py to train the model (the script's mode -- train, test, or interactive -- is presumably selected in its configuration; set it to train for this step):
$ python execute.py
Run execute.py again to generate predicted headlines for the test set (with its mode presumably set to testing for this step):
$ python execute.py
Evaluate the predicted headlines against the actual headlines:
$ python evaluation.py
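evaluation.py scores the predicted headlines against the reference headlines; since NLTK is in the requirements, the metric is likely BLEU-style. A self-contained sketch of unigram precision with a brevity penalty, as a simplified stand-in for whatever evaluation.py actually computes:

```python
import math
from collections import Counter

def bleu1(reference, candidate):
    """Clipped unigram precision times a brevity penalty (BLEU-1)."""
    ref, cand = reference.split(), candidate.split()
    if not cand:
        return 0.0
    # Clipped overlap: each reference word can be matched at most once.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

ref = "euro hits record high versus dollar in asian trading"
pred = "euro hits record high against dollar"
print(round(bleu1(ref, pred), 3))
```

For the example pair above, five of the six predicted words appear in the reference, but the brevity penalty discounts the score because the prediction is shorter than the reference.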
To generate a headline for your own news story, run execute.py in interactive mode:
$ python execute.py
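In interactive mode, the decoder emits one word at a time, feeding each prediction back as the next input until it produces an end-of-sequence token. A toy greedy-decoding loop (the vocabulary and step function are stand-ins; the real model runs the trained attention LSTM decoder at each step):

```python
import numpy as np

VOCAB = ["<eos>", "euro", "hits", "record", "high"]

def decode_step(prev_id, state):
    """Stand-in for one decoder step: returns logits over VOCAB and new state."""
    logits = np.full(len(VOCAB), -1.0)
    logits[(prev_id + 1) % len(VOCAB)] = 1.0  # toy rule: favor the next word id
    return logits, state

def greedy_decode(start_id=0, max_len=10):
    state, word_id, output = None, start_id, []
    for _ in range(max_len):
        logits, state = decode_step(word_id, state)
        word_id = int(np.argmax(logits))      # greedy: take the best word
        if VOCAB[word_id] == "<eos>":         # stop at end-of-sequence
            break
        output.append(VOCAB[word_id])
    return " ".join(output)

print(greedy_decode())  # walks through the toy vocabulary until <eos>
```

Greedy argmax decoding is the simplest strategy; beam search, which keeps several candidate headlines alive at each step, usually produces better summaries.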