j6mes / nlpj2017-fnc-ensemble

Ensemble Classifier for fake news challenge 2017
Apache License 2.0

Baseline FNC implementation

Information about the fake news challenge can be found on FakeChallenge.org.

This repository contains code that reads the dataset, extracts some simple features, trains a cross-validated model and performs an evaluation on a hold-out set of data.

Credit:

Questions / Issues

Please raise questions in the Slack group at fakenewschallenge.slack.com

Getting Started

The FNC dataset is included as a submodule. Download the fnc-1 dataset by running the following commands, which place it in the folder fnc-1/

git submodule init
git submodule update

Useful functions

dataset class

The dataset class reads the FNC-1 dataset and loads the stances and article bodies into two separate containers.

dataset = DataSet()

You can access these through the .stances and .articles variables

print("Total stances: " + str(len(dataset.stances)))
print("Total article bodies: " + str(len(dataset.articles)))

Hold-out set split

Data is split using the generate_hold_out_split() function, which ensures that article bodies in the training set are not present in the hold-out set. The body IDs for each split are written to disk.
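In the FNC-1 baseline this function takes the dataset, the fraction of data to keep for training, and a directory to write the split files to; treat the call below as a sketch of the assumed signature rather than this fork's exact API:

# Assumed baseline signature: generate_hold_out_split(dataset, training=0.8, base_dir="splits")
# Writes the training and hold-out body IDs under base_dir.
generate_hold_out_split(dataset, training=0.8, base_dir="splits")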

k-fold split

The training set is split into k folds using the kfold_split function, which reads the hold-out/training split from disk, generating it first if it is not present.

This returns two items: an array of arrays containing the stance IDs for each fold, and an array containing the hold-out stance IDs.
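A minimal usage sketch (the n_folds keyword and the return order follow the baseline and are assumptions for this fork):

# Returns the per-fold IDs and the hold-out IDs.
folds, hold_out = kfold_split(dataset, n_folds=10)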

Getting headline/stance from IDs

The get_stances_for_folds function maps these IDs back to the stances from the original dataset. See fnc_kfold.py for example usage.
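Continuing the sketch above, with the argument order used in the baseline's fnc_kfold.py (an assumption for this fork):

# Map the split IDs back to the stance dictionaries of the original dataset.
fold_stances, hold_out_stances = get_stances_for_folds(dataset, folds, hold_out)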

Scoring Your Classifier

The report_score function in utils/score.py is based on the original scorer provided in the FNC-1 dataset repository, written by @bgalbraith.

report_score expects two parameters: a list of actual stances (e.g. from the dev dataset) and a list of predicted stances (i.e. what your classifier predicts on the dev dataset). It computes the score and prints it as a percentage of the maximum score attainable on the given gold-standard data (such as a fold or the hold-out set).

predicted = ['unrelated','discuss',...]
actual = [stance['Stance'] for stance in holdout_stances]

report_score(actual, predicted)

This will print a confusion matrix and a final score for your classifier. We provide the scores for a classifier with a simple set of features, which you should be able to match and eventually beat!

            agree  disagree  discuss  unrelated
agree         118         3      556         85
disagree       14         3      130         15
discuss        58         5     1527        210
unrelated       5         1       98       6794

Score: 3538.0 out of 4448.5 (79.53%)
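For reference, the official FNC-1 scorer weights each prediction as follows: 0.25 for any correct label, an extra 0.50 when a non-unrelated label is correct, and 0.25 whenever the actual and predicted labels are both related (agree/disagree/discuss). The sketch below recomputes the score above from the confusion matrix under that weighting:

# Recompute the reported score from the confusion matrix
# (rows = actual, columns = predicted) using the FNC-1 weighting.
labels = ['agree', 'disagree', 'discuss', 'unrelated']
cm = [[118, 3, 556, 85],
      [14, 3, 130, 15],
      [58, 5, 1527, 210],
      [5, 1, 98, 6794]]

score = 0.0
for i, actual in enumerate(labels):
    for j, predicted in enumerate(labels):
        n = cm[i][j]
        if i == j:
            score += 0.25 * n              # correct label
            if actual != 'unrelated':
                score += 0.50 * n          # correct related label
        if actual != 'unrelated' and predicted != 'unrelated':
            score += 0.25 * n              # related/unrelated separated correctly
print(score)  # 3538.0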