Turning news into events since 2014.
This system links a series of Python programs to convert files downloaded by a web scraper into coded event data, which is then uploaded to a web server designated in the config file. The system processes a single day of information at a time, though that day's information may be drawn from multiple text files. The pipeline also implements a filter for source URLs, as defined by the keys in the source_keys.txt file. These keys correspond to the source field in the MongoDB instance.
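For example, a minimal source_keys.txt contains one key per line. These particular keys are illustrative; use whatever values actually appear in your database's source field:

    bbc
    reuters
    xinhua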
For more information, please visit the documentation.
The pipeline requires either Petrarch or Petrarch2 to be installed. Both are Python programs and can be installed from GitHub using pip.
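A typical installation looks something like the following; the repository URLs are assumptions, so verify them against the OEDA GitHub organization first:

    # install one of the two coders from GitHub (URLs assumed; verify first)
    pip install git+https://github.com/openeventdata/petrarch.git
    # or
    pip install git+https://github.com/openeventdata/petrarch2.git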
The pipeline assumes that stories are stored in MongoDB in a particular format: the one used by the OEDA news RSS scraper. See the code for details on how it structures stories in the Mongo instance. Using this pipeline with a differently formatted database will require changing field names throughout the code. The pipeline also requires that stories have been parsed with Stanford CoreNLP. See the simple and stable way to do this, or the experimental distributed approach.
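As a rough sketch, a story document in the format the pipeline assumes might be built and stored like this. The database, collection, and field names are assumptions drawn from the scraper's conventions, not a verified specification:

    # Sketch of the story format the pipeline assumes. Field, database, and
    # collection names are assumptions; consult the OEDA scraper code for
    # the authoritative schema.
    from pymongo import MongoClient

    story = {
        'url': 'http://example.com/news/story.html',  # checked against source_keys.txt
        'source': 'bbc',                              # must match a source key
        'date': '160101',                             # publication date
        'content': 'Full text of the news story...',
        'parsed_sents': ['(ROOT (S ...))'],           # Stanford CoreNLP parse trees
    }

    client = MongoClient('localhost', 27017)
    client['event_scrape']['stories'].insert_one(story)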
The pipeline requires one of two geocoding systems to be running: CLIFF-CLAVIN or Mordecai. For CLIFF, see a VM version here or a Docker container version here. For Mordecai, see the setup instructions here. The version of the pipeline deployed in production currently uses CLIFF-CLAVIN, but future development will focus on improvements to Mordecai.
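Either geocoder runs as an HTTP service that the pipeline queries for each story. A quick way to confirm yours is reachable is to send a test request; the ports and endpoint paths below are assumptions based on common default deployments, so adjust them to match your installation:

    import requests

    TEXT = 'Protesters gathered in Cairo, Egypt.'

    # CLIFF-CLAVIN: often deployed in Tomcat on port 8080; the webapp path
    # varies by version, so verify it against your deployment.
    cliff = requests.get('http://localhost:8080/CLIFF-2.1.1/parse/text',
                         params={'q': TEXT})
    print(cliff.json())

    # Mordecai: often deployed on port 5000 with a /places endpoint.
    mordecai = requests.post('http://localhost:5000/places', json={'text': TEXT})
    print(mordecai.json())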
The pipeline has two configuration files. PHOX_config.ini specifies which geolocation system to use, how to name the files produced by the pipeline, and how to upload those files to a remote server if desired. petr_config.ini is the configuration file for Petrarch2 itself, including the location of dictionaries, new-actor extraction options, and the one-a-day filter. For more details, see the main Petrarch2 repo.
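An abbreviated PHOX_config.ini might look like the sketch below. The section and option names here are illustrative; check the copy of the file shipped with the repo for the real ones:

    [Server]
    server_name = ftp.example.com
    username = phox
    password = changeme
    server_dir = /var/www/uploads/

    [Pipeline]
    ; which geocoder to call: CLIFF or Mordecai
    geo_service = CLIFF
    ; stem used to name the daily output files
    file_stem = events.full.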
To run the program:

    python pipeline.py
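Because the pipeline codes one day of stories per run, it is usually scheduled to run once a night. A crontab entry along these lines (the path is illustrative) does the job:

    # run the pipeline at 01:00 every day
    0 1 * * * cd /opt/phoenix_pipeline && python pipeline.py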