This project will hold all the ETL code required to transform the raw HTML web pages into clean, normalized data that can be used for analysis.
You will need the following installed:

- Python 2.7.x (the `x` can be any patch number)
- `git`
Then click on the 'Fork' button above to make your own copy of the project, so that you can run

```
git clone https://gitlab.com/yourUsername/ht-etl.git
```

where `yourUsername` is your actual user name (e.g. for Bryant Menn it would be `bmenn`; yours may vary). Note that the SSH protocol currently does NOT work with `git lfs` with these instructions.
Do NOT run the following:

```
git clone https://gitlab.com/anidata/ht-etl.git
```
We are going to use what is called a fork-merge model for git.
A sample of the raw data can be accessed by using the `lfs` plugin for git. Instructions to install the `lfs` plugin can be found here. After installing the `lfs` plugin, set it up by running

```
git lfs install
```

and get the file with

```
git lfs fetch
```
Then install the package in development mode:

```
pip install -e .
```
Pick an issue off the issue list and get started! If you need help, ping the `anidata1_1` Slack channel.
When you're done hacking, run

```
git add .
git commit -m "An explanation of what you did goes here"
git push origin
```

And then open a merge request and make sure it's mentioned in the issue's comments.
The ETL batch pipeline uses Luigi (http://luigi.readthedocs.io/en/stable/index.html) under the hood. To configure Luigi, rename `luigi.cfg.example` to `luigi.cfg` and add the password to that file.
To run all the jobs, execute:

```
luigi --module htetl.main_jobs LoadEntityIds --local-scheduler
```
The following example is for running on a local machine instead of Docker, in case you have trouble getting Docker to work.
Create a Python 2.7 environment (e.g. named `python_27`) in Anaconda Navigator and open a terminal from it; `python --version` should say it's 2.7. Then install the package:

```
pip install -e C:\your\path\to\ht-etl
```
To set up the database:

- Download `crawler_er.tar.gz` from https://github.com/anidata/ht-archive and extract the `crawler.sql` file.
- Install PostgreSQL, choose a password for the `postgres` user (called `your_password` below), and create a database named `crawler`.
- Open a psql shell and connect with: host `localhost`, database `crawler` (NOT `postgres`), port `5432` (the default Postgres port is 5432; you can see the server's port in pgAdmin), user `postgres`, password `your_password`.
- At the `crawler=#` prompt, run:

```
CREATE ROLE dbadmin WITH SUPERUSER LOGIN PASSWORD '1234';
\i 'C:/your/path/to/crawler.sql';
```
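If you want to double-check the connection settings before involving Luigi, here is a minimal sketch (assuming `psycopg2` is installed, which the Postgres-loading tasks need anyway):

```python
# Minimal connectivity check against the crawler database
import psycopg2

conn = psycopg2.connect(host='localhost', port=5432, dbname='crawler',
                        user='postgres', password='your_password')
cur = conn.cursor()
cur.execute('SELECT version();')
print(cur.fetchone()[0])  # prints the PostgreSQL server version string
conn.close()
```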
In a Python 2.7 terminal (open it from Anaconda Navigator as described above), run

```
cd C:\your\path\to\ht-etl
```

(otherwise it errors when it can't find things like `data/flat_post.csv`), then:

```
luigi --module htetl.main_jobs EmailsToPostgres --host localhost --database crawler --user postgres --password your_password --local-scheduler
```
If it worked, Luigi's summary at the end should report that all tasks completed successfully.
You should see two new files in your `ht-etl/data` folder: `flat_post.csv` and `parsed_email.csv`. You should also see two new tables in the crawler database (in pgAdmin, right-click the server icon & "Refresh"): `emailaddress` and `table_updates`.
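For a quick look at the generated CSVs from Python (stdlib only; run from the repo root):

```python
# Print the first few rows of each generated CSV
import csv

for path in ['data/flat_post.csv', 'data/parsed_email.csv']:
    with open(path, 'rb') as f:  # binary mode for the csv module on Python 2.7
        print(path)
        for i, row in enumerate(csv.reader(f)):
            print(row)
            if i >= 2:
                break
```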
A strange error I encountered sometimes was "ValueError: need more than 1 value to unpack", originating in `lock.py`. I don't know why, but I fixed it by deleting Luigi's .pid files in `C:\Users\Lukas\AppData\Local\Temp\luigi`.
Roughly, this is how the pipeline works:

- Luigi sees that the task you asked for depends on an upstream task (declared in its `requires()` method), so it runs that task first (its `run()` method).
- The first task gets some columns from the database and makes the 1st CSV, as defined by its `output()` method.
- The next task reads that CSV and writes the 2nd CSV in its `run()` method.
- The final, loading task is built on `luigi.contrib.postgres` (I haven't read all of it, but I expect it works similarly): Luigi calls its `run()`, which eventually calls `rows()` in LoadPostgres, which loads the 2nd CSV file (via `self.input()`) and returns the appropriate generator (yielding a tuple for each row).
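To make the `requires()` / `run()` / `output()` relationship concrete, here is a minimal sketch of a two-task chain in the same style (the class names and file paths are invented for illustration; they are not the actual htetl tasks):

```python
import luigi

class MakeFirstCsv(luigi.Task):
    """Stands in for the task that queries the database and writes the 1st CSV."""

    def output(self):
        # Luigi checks whether this target exists to decide if the task must run
        return luigi.LocalTarget('data/first.csv')

    def run(self):
        with self.output().open('w') as f:
            f.write('id,email\n1,someone@example.com\n')

class MakeSecondCsv(luigi.Task):
    """Stands in for the task that reads the 1st CSV and writes the 2nd."""

    def requires(self):
        # Declares the dependency; Luigi runs MakeFirstCsv first
        return MakeFirstCsv()

    def output(self):
        return luigi.LocalTarget('data/second.csv')

    def run(self):
        with self.input().open('r') as src, self.output().open('w') as dst:
            for line in src:
                dst.write(line)

if __name__ == '__main__':
    luigi.build([MakeSecondCsv()], local_scheduler=True)
```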
The target table and columns are set by the `table` and `columns` variables in EmailsToPostgres. Luigi also records which loads it has completed in the `table_updates` table, which is why both `emailaddress` and `table_updates` appear. If you want to re-run the load from scratch, drop both tables; `table_updates` has to be deleted too, or Luigi will consider the work already done.

To run only the parsing step (everything up to, but not including, the Postgres load), execute:

```
luigi --module htetl.main_jobs ParseEmails --host localhost --database crawler --user postgres --password your_password --local-scheduler
```
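The loading step follows the general shape of `luigi.contrib.postgres.CopyToTable`. Here is a hedged sketch of that pattern (names invented for illustration, reusing `MakeSecondCsv` from the sketch above; the real htetl task differs in its details):

```python
import csv

from luigi.contrib import postgres

class EmailsToDb(postgres.CopyToTable):
    """Illustrative stand-in for the real loading task."""

    # Connection settings; in htetl these arrive as command-line parameters
    host = 'localhost'
    database = 'crawler'
    user = 'postgres'
    password = 'your_password'

    table = 'emailaddress'          # target table
    columns = [('post_id', 'INT'),  # column names and SQL types
               ('email', 'TEXT')]

    def requires(self):
        return MakeSecondCsv()  # the upstream CSV-producing task

    def rows(self):
        # Read the upstream CSV via self.input() and yield one tuple per row
        with self.input().open('r') as f:
            for row in csv.reader(f):
                yield tuple(row)
```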