This project contains the code base used to extract data from the wiki portal http://awoiaf.westeros.org (A Wiki of Ice and Fire, AWOIAF).
|
|-src - main code base (python)
|--lib - application modules
|--sge - scripts to run jobs in parallel on a compute cluster
|-Data - downloaded data
To download the dependencies and set the correct PYTHONPATH, run
$ . ./build.sh
from the root directory of this repository. NB: the leading . (dot) is necessary so that the PYTHONPATH is exported into your current bash session.
Important: the PYTHONPATH must be exported every time you wish to run these tools. To do so you can either run the build script each time, or follow the configuration section below.
Alternatively, install the dependencies manually:
$ easy_install nltk beautifulsoup4 requests
or
$ pip install nltk beautifulsoup4 requests
Next, execute
$ python -m nltk.downloader punkt averaged_perceptron_tagger
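To confirm that the packages and the NLTK data are in place, you can run a quick sanity check (this assumes NLTK downloaded its data to the default location); if it prints nothing and exits cleanly, everything is installed:
$ python -c "import nltk, bs4, requests; nltk.data.find('tokenizers/punkt'); nltk.data.find('taggers/averaged_perceptron_tagger')"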
NOTE: you may need to set up PYTHONPATH to include the path of the installed modules if they were installed into a non-default location (for instance, into your user space).
You will also need to set up the PYTHONPATH to reference the lib folder:
# in bash
AWOIAF_ROOT=/path/to/awoiaf/
export PYTHONPATH="${PYTHONPATH}:${AWOIAF_ROOT}/src/lib"
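If you prefer not to re-export the variable in every new shell, one option (a suggestion, not part of build.sh) is to append the export to your ~/.bashrc; adjust /path/to/awoiaf to wherever you cloned this repository:
# in bash
echo 'export PYTHONPATH="${PYTHONPATH}:/path/to/awoiaf/src/lib"' >> ~/.bashrc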
The scripts in the src folder are the main drivers that build the data repository. You can treat them as the entry points into the code. Here is a brief description of each script:
mineCharDetails.py - handles Ice and Fire characters' data. The script can be used to:

python mineCharDetails.py -l
    obtain a list of character names

python mineCharDetails.py -c "Some One"
    extract data from the wiki entries dedicated to the character Some One
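For example, the two modes can be combined into a batch run. The sketch below assumes that -l prints one character name per line; verify the output format before relying on it:
# in bash (assumes -l prints one name per line)
python mineCharDetails.py -l > characters.txt
while read -r name; do
    python mineCharDetails.py -c "$name"
done < characters.txt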
mineHousesDetails.py - handles data related to the great houses of Westeros. The script can be used to:

python mineHousesDetails.py -l
    obtain a list of all the house names mentioned in the AWOIAF wiki

python mineHousesDetails.py -s "House Name"
    extract data from the wiki pages dedicated to House Name
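The same batch pattern works for the houses; again this assumes -l prints one house name per line:
# in bash (assumes -l prints one house name per line)
python mineHousesDetails.py -l | while read -r house; do python mineHousesDetails.py -s "$house"; done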
Look at the scripts in the src folder to see how the modules in this app can be used.
As this project processes thousands of wiki pages, it makes sense to use parallel processing to speed things up. If you have access to a compute cluster running the Son of Grid Engine scheduling system (a fork of Sun Grid Engine, SGE), check the folder src/sge for scripts and documentation on how to run parallel jobs. If you want to schedule jobs using a different system (e.g. Hadoop YARN), you will have to adapt this yourself.
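For orientation only, here is a minimal sketch of what an SGE array job for this kind of workload could look like. It is not one of the scripts shipped in src/sge; it assumes AWOIAF_ROOT is set as in the configuration section above and that you have already written the character names, one per line, to characters.txt:
#!/bin/bash
# Hypothetical SGE array job: one task per character name in characters.txt
#$ -N awoiaf_chars
#$ -cwd
#$ -t 1-100                 # adjust to the number of lines in characters.txt
export PYTHONPATH="${PYTHONPATH}:${AWOIAF_ROOT}/src/lib"
name=$(sed -n "${SGE_TASK_ID}p" characters.txt)
python mineCharDetails.py -c "$name"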