Toronto City Hall API

Powering the future of open source municipal projects in and around Toronto.

Accessing the API

The API has not been publicly staged yet, but that's going to change in the near future.

Hacking on the API

Clone the repository and install its dependencies:

$ git clone https://github.com/dougestey/toronto-city-hall-api.git
$ cd toronto-city-hall-api
$ npm install

Next, install the Sails CLI (alternatively, you can run these commands from your local project's node_modules):

$ npm install -g sails

You have two options for data persistence: sails-disk (dev only) or sails-postgresql (prod/real life).

If you're using the former, skip straight to lifting the sails below.

If you're using PostgreSQL, configure the connection you wish to use in config/connections.js and update config/env/development.js to reference the Postgres connection.
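As a rough sketch of what that looks like in a Sails project (the connection name torontoPostgres and the credentials below are placeholders, not values the project prescribes):

// config/connections.js -- placeholder credentials, adjust to your setup
module.exports.connections = {
  torontoPostgres: {
    adapter: 'sails-postgresql',
    host: 'localhost',
    user: 'toronto_api_user',
    password: 'your_password',
    database: 'your_db_name'
  }
};

// config/env/development.js -- point the models at that connection
module.exports = {
  models: {
    connection: 'torontoPostgres'
  }
};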

If you'd like a fresh, empty db to hack on, skip this step and go straight to migrations. If you'd rather bootstrap with the latest available snapshot, run the following:

$ psql your_db_name < db/init/toronto_2015-05-22.sql 

A user named toronto_api_user will need access to your db (you can change this by replacing it with your desired username in the import file).
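If that role doesn't exist yet, one way to create it and grant access, assuming a standard local Postgres install:

$ createuser toronto_api_user
$ psql -c "GRANT ALL PRIVILEGES ON DATABASE your_db_name TO toronto_api_user;"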

Then install the sails-migrations CLI and provision the database. It's important to do this even if you imported an SQL snapshot, as the schema may have changed since the snapshot was taken.

$ npm install -g sails-migrations
$ sails-migrations migrate

A temporary requirement at this stage is to either a) follow the instructions below to set up your scraper, or b) create a db called 'pupa' and populate it with a snapshot from db/init; a minimal sketch of the latter follows.
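For option b), something like this should work (the snapshot filename is a placeholder, use whichever file currently ships in db/init):

$ createdb pupa
$ psql pupa < db/init/your_scraper_snapshot.sql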

Finally, lift the sails:

$ sails lift
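Once the server is up, it will be listening on http://localhost:1337 by default, which is the address the update examples below assume.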

Updating against locally running scrapers

The API is designed to be compatible with the opencivicdata/scrapers-ca project: its models are mapped to tables generated by those scrapers, so if you're interested in updating that data, you'll need to get one of those scrapers running.
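If you haven't run those scrapers before, the workflow is roughly sketched below; defer to the scrapers-ca README for the authoritative steps, and note that the module name ca_on_toronto is an assumption about how the Toronto scraper is registered:

$ git clone https://github.com/opencivicdata/scrapers-ca.git
$ cd scrapers-ca
$ pip install -r requirements.txt
$ pupa update ca_on_toronto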

Alternatively, if you're hacking on the API with little interest in current data, a snapshot of the scraper DB is provided in db/init.

Updating against third party data

The API is also capable of updating its local database against external data sources. These updates are scheduled through the Update service. Their schedules start running on server startup in config/bootstrap.js, but you can also invoke them manually by curling the routes available in config/routes.js.
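For illustration only, a manual-update route in config/routes.js would follow the usual Sails shape; the controller and action names here are hypothetical, not confirmed by the project:

// config/routes.js -- illustrative shape; actual names may differ
module.exports.routes = {
  'get /update/wards': 'UpdateController.wards'
};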

You're going to want to specify the API keys you're using to access said third parties (if applicable). Place them in keys/ and update api/services/Keys.js accordingly before attempting to run anything in the Update service.
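As a purely illustrative sketch (the file name namara.json and the property names are assumptions, not the project's actual layout):

// api/services/Keys.js -- hypothetical shape; match the real service's exports
module.exports = {
  namara: require('../../keys/namara.json').apiKey
};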

For example, if you wanted to get the latest ward data & geometry from a service like Namara.io, you would run:

$ curl http://localhost:1337/update/wards

Your server console will tell you how it goes. :)