This project is no longer active. Live version used to live at: http://survey.internationalbudget.org (Archived version: https://web.archive.org/web/20220119021336/http://survey.internationalbudget.org/)
Developed in collaboration between the International Budget Partnership and the Open Knowledge Foundation. Written by Tom Rees, Hélène Durand, Tryggvi Björgvinsson, Damjan Velickovski, and Brook Elgie.
Explorer is the biggest part of the web application, representing most of the endpoints, and is served from the root route `/`. It is a static Backbone app (served through Express), built using webpack. Its data is built up from static files stored in the `./data` directory. See below for more details.
In addition to the explorer and tracker applications, there is another small static app that serves the questionnaire review pages. A page is built for each country in the survey, with questions and answers from the survey questionnaire, for ease of review. These pages can be accessed with a username and password at `/questionnaires`. They are rebuilt each time the app is deployed, from data defined in a .csv file hosted on Google Sheets. The questionnaire data spreadsheet ID, and the username and password, are set as environment variables as described below. The static pages are built using Metalsmith into `/_build-questionnaires` and served as a static site from the central Express app.
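The `/questionnaires` pages sit behind a username/password check. A minimal sketch of how such a check can verify an HTTP Basic `Authorization` header against the `username:password` form of `QUESTIONNAIRE_AUTH` described under the environment variables below — the helper name is hypothetical, not code from this repo:

```javascript
// Illustrative sketch: verify an HTTP Basic "Authorization" header
// against QUESTIONNAIRE_AUTH ("username:password"). Hypothetical helper.
function checkQuestionnaireAuth(authHeader, expected /* "user:pass" */) {
  if (!authHeader || !authHeader.startsWith('Basic ')) return false;
  // Basic auth payload is base64("username:password").
  const decoded = Buffer.from(authHeader.slice('Basic '.length), 'base64')
    .toString('utf8');
  return decoded === expected;
}
```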
There is retired code for the Tracker application in the `tracker` directory. It was concerned with the 'Document Availability' page, previously served at the `/availability` endpoint. The functionality offered by this app has since been moved into the Explorer and is now available from `#availability`. The Tracker will be removed at some point in the future. Below is the previous description:
The Tracker app is concerned with the 'Document Availability' page and is served from the `/availability` route. It is an Express app. Its data is retrieved at runtime from an external API using the separate ibp-explorer-data-client app. Some of the installation instructions below concern the retired Tracker and can be ignored.
To run locally:

- Create a `.env` file with the environment variables described below.
- `npm install` in the root directory of this repo to install dependencies.
- `npm run build:dev` to bundle the front-end for the explorer, build the tracker, and a small sample of the questionnaire pages. If you want to watch for code changes, use `npm run build:dev:watch`. This will also start the server.
- `npm run build:dev:tracker` or `npm run build:dev:tracker:watch` to do the same only for the tracker.
- `npm run build:dev:explorer` or `npm run build:dev:explorer:watch` to do the same only for the explorer.
- `npm run build:questionnaires:dev` to build only the questionnaires.
- `npm run start` to start the node server.

To deploy:
Deploy the `ibp-explorer` app with `npm run build:prod`, which builds a minified version of the tracker, explorer, and all the questionnaire review pages. The server listens on `PORT`.

Environment variables:

- `PORT` - port on which the server will listen. Default is 3000.
- `TRACKER_LAST_UPDATE` - date displayed on the Availability page, marking when the last API update occurred.

You will also need to set the environment variables needed by ibp-explorer-data-client:

- `API_BASE` - base URL for the API.
- `API_USERNAME` - username for the API.
- `API_PASSWORD` - password for the API.
- `SERVICE_CREDENTIALS` - Google Service JSON token. You can do `` export SERVICE_CREDENTIALS=`cat <path_to_credentials.json>` ``.
- `DRIVE_ROOT` - ID of the Google Drive folder that serves as root when searching for documents.
- `AWS_ACCESS_KEY_ID` - your AWS access key.
- `AWS_SECRET_ACCESS_KEY` - your AWS secret access key.
- `AWS_REGION` - region where the bucket is located.
- `AWS_BUCKET` - name of the bucket where snapshots are stored.
- `SPREADSHEET_ID` - ID of the spreadsheet where the found documents should be written.
- `QUESTIONNAIRE_AUTH` - username and password used to restrict access to the questionnaire URLs, in the form `username:password`.
- `QUESTIONNAIRE_SPREADSHEET_ID` - Google Sheets spreadsheet ID representing the questionnaire data source.

To test:

- `npm run start`
- `npm run test`
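The environment variables above can be collected in the `.env` file mentioned under the local setup instructions. An illustrative example — every value here is a placeholder, not a real credential or ID:

```
# .env — placeholder values only
PORT=3000
TRACKER_LAST_UPDATE=2021-12-31
QUESTIONNAIRE_AUTH=reviewer:changeme
QUESTIONNAIRE_SPREADSHEET_ID=your-spreadsheet-id
```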
All the data lives in the `./data` folder, along with a Python tool to Extract-Transform-Load it through a complicated data massage. Outputs are:

- `./vendor/ibp_dataset.js`, which is used by the JavaScript datatool.
- `./app/assets/downloads/`, which is filled with downloadable files.

To update the data, change the files in the `./data` folder. To get those changes processed by the tool:

- `pip install -r requirements.txt`
- `python etl.py` to run the tool and regenerate the outputs.

After generating new data from the ETL script:

- Update the `THIS_YEAR` and `INDIVIDUAL_YEARS` constants in `explorer/util.js` with the latest survey year.
- `explorer/views/templates/download_files.hbs` needs to be updated manually. The client will supply files.

Some basic tests for the Python ETL pipeline are provided in `./data/tests`. Run `pytest` in the `./data` directory. These compare the ETL output with expected data.
- `npm run extract-pot` to extract all the strings for translation into a .pot file.
- `npm run merge-po` to merge the new strings for translation into the existing .po files.
- `npm run compile-json` to compile the .po files to JSON message files, which the app uses.
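The compile step turns gettext `.po` entries into JSON message catalogs. A toy sketch of that transformation, handling only simple one-line `msgid`/`msgstr` pairs — the real compile script certainly handles more (plurals, multi-line strings, escapes), and this function is not from the repo:

```javascript
// Toy .po -> JSON converter: handles only simple one-line
// msgid "..." / msgstr "..." pairs. Illustrative, not the repo's script.
function poToJson(poText) {
  const messages = {};
  let currentId = null;
  for (const line of poText.split('\n')) {
    const id = line.match(/^msgid "(.*)"$/);
    const str = line.match(/^msgstr "(.*)"$/);
    if (id) {
      currentId = id[1];
    } else if (str && currentId) { // skip the empty-msgid header entry
      messages[currentId] = str[1];
      currentId = null;
    }
  }
  return messages;
}
```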