This README is for development of the APIs. Public documentation is located here.
DO NOT PUSH DIRECTLY TO MASTER UNLESS YOU INTEND TO DEPLOY A NEW VERSION OF THE SITE. THE SITE AND ALL APIS ARE CONTINUOUSLY REDEPLOYED WHEN THE MASTER BRANCH IS UPDATED.
NOTE: At all times, the following should match the contributions page on our website (so update both simultaneously)!
The future of Brown APIs depends on you! All of our code is open source, and we rely heavily on contributions from the Brown community. You can view our code (along with open issues and future plans) on GitHub.
There are many ways to help further the development of Brown APIs. You can add new APIs, maintain and enhance current APIs, fix bugs, improve this website, or build better tools to help others contribute. Check the issues on our GitHub for suggestions of what to do first. You don't need to be able to code to help, either. Reach out to CIS and other university organizations to get easier and wider access to campus data.
The APIs are written in Python and run on a Flask server. This website is also served by the same server and uses Jinja templates with the Bootstrap framework.
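As a rough illustration of that architecture (the route names and template path below are made up, not the actual app's), a single Flask server can render the Jinja-templated website and serve JSON API endpoints side by side:

```python
# A rough sketch (not the actual app structure): one Flask server
# handles both the Jinja-rendered website and the JSON APIs.
from flask import Flask, jsonify, render_template

app = Flask(__name__)

@app.route('/')
def index():
    # Website pages are rendered from Jinja templates (with Bootstrap).
    return render_template('index.html')

@app.route('/api/v1/example')
def example_api():
    # API endpoints on the same server return JSON.
    return jsonify({'status': 'ok'})
```
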
Data is stored in a single MongoDB database hosted on mLab.com (Note: This was probably a bad decision that could really use some contributions to fix!). Because there is only one copy of the database, developers must take care to avoid corrupting the data while testing fixes or new features.
You'll need the latest version of Python 3, along with `virtualenv` and `pip`. Go ahead and look up these programs if you aren't familiar with them; they're crucial to our development process.
Clone the repository:

```shell
git clone https://github.com/hackatbrown/brown-apis.git
```
Create and activate a virtual environment (look up `virtualenv` online to understand what this does), then install the dependencies:

```shell
virtualenv -p `which python3` venv
source venv/bin/activate
pip install -r requirements.txt
```
Create a new branch for your changes:

```shell
git checkout -b <descriptive-branch-name>
```
When your changes are ready, push your branch to `origin/<branch-name>` and open a new pull request.

To test changes before they reach production, push them to the `stage` branch. These changes will be automatically reflected on our staging server. You can merge changes from your development branch into the `stage` branch with:
```shell
git checkout stage
git fetch origin
git reset --hard origin/master
git rebase <your-branch-name>
git push --force
```
Your changes will be merged into `master` once your pull request is accepted.
Your code will be run against flake8, a tool which checks coding style and catches common mistakes. You can also run flake8 locally from within the virtual environment.

Scripts are run as modules:

```shell
python3 -m api.scripts.<scriptname>
```
where 'scriptname' does NOT include the '.py' extension.

We use MongoDB to store various menus and schedules, as well as client information. In MongoDB, all objects are stored as JSON-like documents, and there is no schema that forces all objects in a collection to share the same fields. Thus, we keep documentation of the different collections here (and in the API overviews below) to encourage an implicit schema. Objects added to the database should follow these templates. If you add a new collection to the database, remember to add a template here, too.
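As a purely illustrative example (the field names below are invented, not an actual collection's schema), a template and a helper that checks a document against it might look like:

```python
# Illustrative only: a made-up "eateries" template showing the kind of
# implicit schema documented for each collection. These field names are
# hypothetical, not the real database layout.
eatery_template = {
    "name": "Example Eatery",   # display name
    "open_hour": 7,             # opening hour, 24-hour clock
    "close_hour": 20,           # closing hour, 24-hour clock
    "menus": [                  # one entry per meal
        {"meal": "lunch", "items": ["Example item"]},
    ],
}

def follows_template(doc, template):
    # Sanity check: does a document carry at least the template's fields?
    return set(template) <= set(doc)
```
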
The Dining API is updated every day by a scraper that parses the menus from Brown Dining Services' website. The hours for each eatery are entered manually inside of the scraper script before each semester. When the scraper is run, all this data is stored in the database. Calls to the API trigger various queries to the database and fetch the scraped data.
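That read path can be sketched roughly as follows; a plain dict stands in for the MongoDB collection, and the eatery and field names are illustrative, not the real schema:

```python
# Rough sketch of the Dining API read path. A plain dict stands in for
# the MongoDB collection; the eatery and field names are illustrative.
SCRAPED_MENUS = {
    ("ratty", "2024-01-15"): {"lunch": ["Example entree", "Example side"]},
}

def get_menu(eatery, date):
    # A call to the API triggers a lookup of the data the scraper stored.
    menu = SCRAPED_MENUS.get((eatery, date))
    if menu is None:
        return {"error": "no menu found"}
    return {"eatery": eatery, "date": date, "menu": menu}
```
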
The WiFi API just forwards requests to another API run by Brown CIS. Their API is protected by a password (HTTP Basic Auth) and is nearly identical to the WiFi API that we expose. The response from the CIS API is returned back to the client.
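The forwarding pattern might look like this sketch, written with only the standard library; the URL and credentials are placeholders, and the real code may use a different HTTP client:

```python
# Sketch of the proxy pattern described above: forward the request to the
# password-protected CIS API (HTTP Basic Auth) and return its response.
# The URL and credentials below are placeholders, not real values.
import base64
import json
import urllib.request

CIS_API_URL = "https://example.cis.brown.edu/wifi"  # placeholder URL
CIS_USER, CIS_PASS = "user", "secret"               # placeholder credentials

def forward_wifi_request(path_query):
    # Build the Basic Auth header and forward the client's query string.
    creds = base64.b64encode(f"{CIS_USER}:{CIS_PASS}".encode()).decode()
    req = urllib.request.Request(
        CIS_API_URL + path_query,
        headers={"Authorization": "Basic " + creds},
    )
    # Hand the CIS response straight back to the client.
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```
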
The Laundry API is updated manually with a scraper that pulls all the laundry rooms and stores them in the database. When a request is received, the API checks the request against the list of rooms in the database and optionally retrieves status information by scraping the laundry website in realtime.
Machine types include `washFL`, `washNdry`, and `dry`.
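The flow described above can be sketched as follows; the room name, fields, and stub scraper are illustrative, not the real implementation:

```python
# Sketch of the Laundry API lookup: validate the room against the
# database, then optionally fetch live status. A plain dict stands in
# for the database, and all names here are illustrative.
KNOWN_ROOMS = {"example-hall": {"machines": ["washFL", "dry"]}}

def scrape_live_status(room_id):
    # Placeholder for the realtime scrape of the laundry website.
    return "unavailable"

def get_room(room_id, include_status=False):
    # Check the request against the list of rooms in the database.
    room = KNOWN_ROOMS.get(room_id)
    if room is None:
        return {"error": "unknown room"}
    result = {"room": room_id, "machines": room["machines"]}
    if include_status:
        # Optionally retrieve status information in realtime.
        result["status"] = scrape_live_status(room_id)
    return result
```
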
The Academic API used to scrape course information from Banner and store it in the database. Since Banner has been deprecated for course selection, the Academic API scraper has stopped working, and we are no longer able to collect course data. Thus, the Academic API is unavailable for the foreseeable future. Contributions are especially welcome here.