
Catalog API

About

The Catalog API (or catalog-api) is a Python Django project that provides a customizable REST API layer for Innovative Interfaces' Sierra ILS. This differs from the built-in Sierra API in a number of ways, not least of which is that the API design is fully under your control. In addition to a basic API implementation, a complete toolkit is provided that allows you to turn any of the data exposed via Sierra database views (and even data from other sources) into your own API resources.

Key Features

Project Structure

There are three directories in the project root: django, requirements, and solrconf.

The requirements directory simply contains pip requirements files for various environments — dev, production, and tests.

The django directory contains the Django project and related code, in django/sierra. The manage.py script for issuing Django commands is located here, along with the apps for the project:

The solrconf directory contains necessary Solr configuration — core configuration for Discover (our Blacklight app whose indexes are maintained via the Catalog API) and Haystack cores. This is designed so you can copy each core/conf directory to your Solr server.

Setting up Sierra Users

Before getting started, you should first take a moment to set up Sierra users. The catalog-api requires access to Sierra to export data, and you must create a new Sierra user for each instance of the project that will be running (e.g., for each dev version, for staging, for production). Be sure that each user has the Sierra SQL Access application assigned in the Sierra admin interface.

Installation and Getting Started, Docker

The recommended setup for development is to use Docker and Docker Compose to help automate building and managing the environment in which the catalog-api code runs. It is simpler to manage than the manual method and is thus well-suited for testing and development, but it is not meant for production deployment.

The repository contains configuration files for Docker (Dockerfile) and Docker Compose (docker-compose.yml) that define how to build, configure, and run catalog-api processes. All processes that comprise the catalog-api system, including databases, Solr, and Redis, run as services inside their own Docker containers. They remain isolated from your host system, except insofar as they:

  1. share a kernel,
  2. use data volumes to store persistent data on the host, and
  3. map container ports to host ports to expose running services.

Running software in a container is otherwise similar to running it in a virtual machine.

If you are not familiar with Docker, we recommend that you at least run through the basic Docker Getting Started tutorials and Get Started with Docker Compose before you proceed. Understanding images, containers, and services is especially key to understanding how to troubleshoot.

Requirements

Setup Instructions

Install Docker and Docker Compose.

Clone the repository to your local machine.

Use the git clone command plus the appropriate URL to create a local copy of the repository. For instance, to clone from GitHub, using SSH, into a local catalog-api directory:

git clone git@github.com:unt-libraries/catalog-api.git catalog-api

Configure local settings.

For environment-specific settings, such as secrets and database connection details, you should create a .env settings file, following the instructions in the Configuring Local Settings section below.
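As a rough illustration (the variable names shown are just a sampling and the values are placeholders; the full set is defined in the .env.template file covered under Configuring Local Settings), a .env file is simply a list of KEY=value lines:

SECRET_KEY='your-generated-django-secret-key'
DEFAULT_DB_USER=capi
DEFAULT_DB_PASSWORD=changeme
SOLR_HOST=127.0.0.1
SOLR_PORT=8983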

Build the Docker environment(s).

In the repository root, you can run

./init-dockerdata.sh all

It will take several minutes to finish, but it should complete these steps:

However, when running the build in a CI environment, we've found it necessary to do an explicit pull, then a build, and then finally run the init script. So, if you get errors while running the init script as above, you may try the following sequence instead:

./docker-compose.sh pull
./docker-compose.sh build
./init-dockerdata.sh all

(Optional) Run tests.

If you wish, you can try running Sierra database/model tests to make sure that Django is correctly reading from your production Sierra database.

You may also run unit tests.
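(Both are covered in more detail in the Testing section below; the corresponding Docker Compose services are run like this.)

./docker-compose.sh run --rm live-db-test
./docker-compose.sh run --rm test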

Generate a new secret key for Django.

./docker-compose.sh run --rm manage-dev generate_secret_key

Copy/paste the new secret key into your SECRET_KEY environment variable.

Create a superuser account for Django.

./docker-compose.sh run --rm manage-dev createsuperuser

Go through the interactive setup. Remember your username and password, as you'll use this to log into the Django admin screen for the first time. (You can create additional users from there.)

Start the app.

There are two main Docker Compose services that you'll use during development: one to control the app (i.e., to run the Django web server) and one to control the Celery worker.

You can start them up individually like so:

./docker-compose.sh up -d app
./docker-compose.sh up -d celery-worker

Other services, such as your database, Solr, and Redis, are started automatically if they aren't already running.

Note that the -d flag runs these as background processes, but you can run them in the foreground (to write output to stdout) by omitting the flag.

Check to make sure everything is up.
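One quick way to do this (standard Docker Compose usage, passed through the wrapper script) is to list the running services and, if needed, tail the app's logs:

./docker-compose.sh ps
./docker-compose.sh logs -f app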

Check to make sure Sierra data exports work.

Follow the steps in this section to make sure you can export data from Sierra and view the results in the API.

Stop running services.

Whenever you're finished, you can stop all running catalog-api services and remove all containers with one command.

./docker-compose.sh down

Even though containers are removed, data stored on data volumes remains. Next time you start up the catalog-api services, your data should still be there.

More About the Docker Setup

Docker Compose Config File Version and Docker Swarm

Be aware that we have not tested our setup with a Docker swarm, even though the docker-compose.yml file does conform to the version 3 specification.

Running Docker Compose Services, docker-compose.sh

Nearly everything you'll need to do — building images, running containers, starting services, running tests, and running Django manage.py commands — is implemented as a Docker Compose service.

Normally you'd run services by issuing docker-compose commands. But for this project, you should use the provided shell script, docker-compose.sh, instead. This script simply loads environment variables from your .env settings file, effectively making those available to docker-compose.yml, before passing your arguments on to docker-compose.

In other words, instead of issuing a command like docker-compose run --rm test, you'd run ./docker-compose.sh run --rm test.
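Conceptually, the wrapper does something like the following (an illustrative sketch only, not the actual script):

#!/usr/bin/env bash
# Illustrative sketch. Export everything defined in the Django settings .env
# file so that docker-compose.yml can reference those variables, then hand
# all arguments off to docker-compose unchanged.
set -a
. django/sierra/sierra/settings/.env
set +a
exec docker-compose "$@"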

Data Volumes

We store persistent data, such as database data, within data volumes created on your host machine and mounted in the appropriate container(s).

For simplicity, all Docker volumes are created under a docker_data directory within the root repository directory. This directory is ignored in .gitignore.

Each service has its own subdirectory within docker_data. In some cases this subdirectory is the mount point (as with PostgreSQL and MariaDB), and in other cases it contains additional child directories to separate things like logs from data (as with the Redis and Solr services).

The containers/services that run the catalog-api code mount the root catalog-api directory on your host machine as a data volume. Updating code locally on your host also updates it inside the running containers — so, you don't have to rebuild images with every code update.

Initializing Databases, init-dockerdata.sh

We've created an init-dockerdata.sh shell script to help make it easier to initialize Docker data volumes so that services will run correctly. Database migrations can be managed through this script, too. The setup instructions above use this script to initialize data volumes during first-time setup, but you can also use it to wipe out data for a particular service and start with a clean slate.

Run

./init-dockerdata.sh -h

for help and usage information.

Tests and Test Data

The catalog-api project has complex testing needs, and Docker provides an ideal way to meet those needs. Test instances of the default Django database, the Sierra database, Solr, and Redis are implemented as their own Docker Compose services. Running the test and manage-test services ties the catalog-api code into these test instances before running tests, by invoking Django with the sierra.settings.test settings.

To ensure that tests run quickly, the test databases and some test data are stored in data volumes and can be initialized alongside the development databases using init-dockerdata.sh.

Building the catalog-api Image

The Dockerfile contains the custom build for the catalog-api services defined in docker-compose.yml: app, celery-worker, test, manage-test, and manage-dev. The first time you run any of these services, the image will be built, which may take a few minutes. Subsequently, running these services will use the cached image.

As mentioned above, changes to the catalog-api code do not require the image to be rebuilt. However, changes to installed requirements do. For example, if you need to update any of the requirements files, installed Python libraries will not be updated in your containers until you issue a docker-compose.sh build command. In other words, where you might otherwise run pip install to install a new library in your local environment, you'll instead update the requirements file with the new library name/version and then rebuild the image to include the new library.
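For example, the workflow for adding a new library (assuming it belongs in requirements-base.txt) looks roughly like this:

# 1. Add the new library name/version to requirements/requirements-base.txt.
# 2. Rebuild the image so the library gets installed:
./docker-compose.sh build
# 3. Restart the affected services:
./docker-compose.sh up -d app celery-worker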

Official images for all other services are pulled from Docker Cloud.

Installation and Getting Started, Non-Docker

The steps below outline the manual, non-Docker setup. If you're creating a production environment, these may serve as the basis for your setup but likely would not result in the exact configuration you'd want. They are geared more toward a pre-production environment, as they assume that everything will be installed on the same machine, which would not be the case in production.

We do include tips for production configuration, where possible.

Requirements

Install prerequisites.

Python 3 >= 3.9

Personally I like using pyenv for installing and managing different Python versions. You can also install pyenv-virtualenv if you want to manage your virtualenvs using pyenv. Otherwise, you can just stick with the venv tool that's part of Python now.

Requirements for psycopg2

In order for psycopg2 to build correctly, you'll need to have the appropriate dev packages installed in your OS.

Ubuntu/Debian:

sudo apt-get install libpq-dev python3-dev

Red Hat:

sudo yum install python3-devel postgresql-devel

On Mac, with homebrew:

brew install postgresql

Redis

Redis is required to serve as a message broker for Celery. It's also used to store some application data. You can follow the getting started guide to get started, but please make sure to set up your redis.conf file appropriately.

Default Redis settings only save your data periodically, so you'll want to take a look at how Redis persistence works. I'd recommend RDB snapshots and AOF persistence, but you'll have to turn AOF on in your configuration file by setting appendonly yes. Note that if you store the dump.rdb and/or appendonly.aof files anywhere in the catalog-api project and you rename them, you'll need to add them to .gitignore.
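For example, a redis.conf set up for both RDB snapshots and AOF persistence might include lines like these (illustrative values, not project defaults):

# Take an RDB snapshot if at least 1 key changed in the last 900 seconds.
save 900 1
# Turn on append-only-file persistence.
appendonly yes
appendfilename "appendonly.aof"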

Production Notes

The "Install Redis more properly" section of the Redis getting started guide contains useful information for deploying Redis in a production environment.

You'll also want to be sure to take a look at the Securing Redis section. It's HIGHLY recommended that you at least have Redis behind a firewall and set a default user password for each of the two Redis instances that will be running.

To configure your default user, include a line such as this in your Redis .conf file.

user default on ~* &* +@all -@admin -@dangerous #PASSWORDHASH-SHA256

This line does a few things:

  1. It gives the default user access to all keys (~*) and all channels (&*) in that instance/database.
  2. It gives the user permission to use all commands (+@all) EXCEPT admin (-@admin) and dangerous (-@dangerous) ones.
  3. It sets the password for the default user. Since the conf file is stored in plain text, setting the SHA256 hash here is safer than storing the password, although you can store the password itself and just write-protect the file.

Redis ACL settings are complex; you can configure multiple users with various roles as needed. However, the Redis Python package does not yet have good support for users besides the default one.
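If you do store a SHA-256 hash rather than the plain password (as in the example line above), one standard way to generate the digest on Linux is:

# Prints the SHA-256 hex digest to paste after the '#' in the ACL line.
echo -n 'your-redis-password' | sha256sum

(On macOS, use shasum -a 256 in place of sha256sum.)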

Solr

Get Solr from the Apache Solr download page. The Solr Reference Guide has instructions for installing Solr. If you're deploying for production, I'd recommend following the production deployment instructions instead, using the service installation script.

Once you've installed Solr, you must also install the necessary cores or collections using the provided configuration files in solrconf/discover-01/conf, solrconf/discover-02/conf, and solrconf/haystack/conf. Exactly how you do this depends on whether or not you are running Solr in SolrCloud mode. The configuration sections of the Solr Reference Guide can help you figure out where to put these.

Production Notes

In a production environment, you will want Solr running on its own server(s). Don't attempt to run it on the same server running the Catalog API. Solr architecture is a whole topic unto itself, but the Catalog API supports running a standalone Solr server or using a multi-server architecture.

When running Solr in standalone mode, you only need to configure the SOLR_PORT and SOLR_HOST environment variables.

But when running Solr on multiple servers, you can use the SOLR_*_URL_FOR_UPDATE and SOLR_*_URL_FOR_SEARCH environment variables to control which server the Catalog API sends index updates to and which server it searches. These could be URLs for a load balancer that forwards your request to an available node. Or, in a user-managed cluster, you might send updates to the leader and searches to a search-only follower.

Additionally, because the Catalog API is geared toward periodic batch updates instead of near real-time updates, if you're using user-managed replication, you may prefer that the Catalog API explicitly tell Solr to replicate ONLY after an update happens rather than having followers poll the leader needlessly. For this, set the SOLR_*_MANUAL_REPLICATION environment variables to True. This tells the Catalog API to trigger replication for that core on all followers whenever it commits to that core. I.e., when the utils.solr.commit function is called, it issues a call to the appropriate SOLR_*_MANUAL_REPLICATION_HANDLER (e.g., replication) for each of the SOLR_*_FOLLOWER_URLS after it sends the commit command to Solr. Note that, if you set this for a given core, you should disable polling in the replication handler (in solrconfig.xml) by removing the pollInterval setting.
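As a purely hypothetical example for the haystack core, the relevant .env entries might look something like the following (the exact variable names and value formats are defined in .env.template, so treat this only as an illustration; the follower URLs are placeholders):

SOLR_HAYSTACK_MANUAL_REPLICATION=True
SOLR_HAYSTACK_MANUAL_REPLICATION_HANDLER=replication
SOLR_HAYSTACK_FOLLOWER_URLS=http://solr-follower-1:8983/solr/haystack,http://solr-follower-2:8983/solr/haystack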

Django Database

You'll need to have an RDBMS installed that you can use for the Django database. PostgreSQL or MySQL/MariaDB are recommended.

Set up a virtual environment.

You should contain your instance of the Catalog API project in a disposable virtual environment — never install development projects to your system Python. At this point, setting up a virtualenv is just standard operating procedure for any Python project.

If you aren't using pyenv-virtualenv, you can run, for example:

/path/to/py3/bin/python -m venv /path/to/new/venv

Now you can treat /path/to/new/venv like you have a copy of the Python from /path/to/py3. Run the new Python with /path/to/new/venv/bin/python. Install packages with /path/to/new/venv/bin/pip. It's fully self-contained and doesn't affect /path/to/py3 in any way.

See the Python venv documentation for more information.

Clone the catalog-api to your local machine.

git clone git@content.library.unt.edu:catalog/catalog-api.git catalog-api

Install needed python requirements.

pip install -r requirements/requirements-base.txt \
            -r requirements/requirements-dev.txt \
            -r requirements/requirements-tests.txt \
            -r requirements/requirements-production.txt

Omit dev, tests, or production if not needed in a given environment.

Configure local settings.

Now you must set up a number of other environment-specific options, such as secrets and connection details for databases, Redis, Solr, etc. Follow the instructions included below.

Generate a new secret key for Django.

cd catalog-api/django/sierra
python manage.py generate_secret_key

Copy/paste the new secret key into your SECRET_KEY environment variable.
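For instance (the value is a placeholder):

# In django/sierra/sierra/settings/.env:
SECRET_KEY='paste-the-generated-key-here'
# Or exported in your shell or service environment:
export SECRET_KEY='paste-the-generated-key-here'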

Run migrations and install fixtures.

Make sure that your database server is up and running, and then:

cd catalog-api/django/sierra
python manage.py migrate

This creates the default Django database and populates certain tables with needed data.

Create a superuser account for Django.

cd catalog-api/django/sierra
python manage.py createsuperuser

Run through the interactive setup. Remember your username and password, as you'll use this to log into the Django admin screen for the first time. (You can create additional users from there.)

(Optional) Run tests.

If you wish, you can try running Sierra database/model tests to make sure that Django is reading correctly from your production Sierra database.

You may also try running unit tests, although setting these up locally without using Docker requires a bit of work.

Start services: Solr, Redis, Django Dev Server, and Celery.

All the services needed to run the Catalog API should now be installed and ready to go. You'll want to start each of these and have them running to use all features of the catalog-api software. (In the below instructions, replace the referenced environment variables with the actual values you're using, as needed.)

Production Note: In production you'll want all of these to run as daemons, e.g. systemd services. They should always be running and start up automatically when you reboot.

Your Django Database

If you've been following this guide, then you've already run migrations, so your Django Database should already be running.

Solr

When you installed and configured Solr, you should have set what port it is running on in the solr.in.sh file. Be sure this port matches the SOLR_PORT environment variable and that the correct SOLR_HOST is set on the machine where you're running the Catalog API.
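For example (8983 is Solr's standard default port; the values are illustrative):

# In /path/to/solr/bin/solr.in.sh:
SOLR_PORT=8983

# In your catalog-api environment (.env or shell):
SOLR_HOST=127.0.0.1
SOLR_PORT=8983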

Generally, you can start Solr using:

/path/to/solr/bin/solr start

Redis

We'll have two Redis processes running, each on a different port.

The first is for Celery:

/path/to/redis/redis-server /path/to/redis-celery.conf --port $REDIS_CELERY_PORT

The second is for the app data we need to store:

/path/to/redis/redis-server /path/to/redis-appdata.conf --port $REDIS_APPDATA_PORT

Make sure the REDIS_CELERY_PORT and REDIS_APPDATA_PORT environment variables are configured appropriately. If you have default user passwords set in your conf files — and you should! — be sure that your REDIS_CELERY_PASSWORD and REDIS_APPDATA_PASSWORD environment variables are also set.
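For example (the ports are placeholders; use any two free ports that match the ports in your two conf files):

REDIS_CELERY_PORT=6379
REDIS_APPDATA_PORT=6380
REDIS_CELERY_PASSWORD='your-celery-redis-password'
REDIS_APPDATA_PASSWORD='your-appdata-redis-password'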

Django Development Web Server

For development, you can run Django using the built-in web server. This is absolutely not meant for production!

cd catalog-api/django/sierra
python manage.py runserver 127.0.0.1:$DJANGO_PORT

If you didn't set the $DJANGO_PORT environment variable, replace $DJANGO_PORT with 8000.

If all goes well, you should see something like this:

System check identified no issues (0 silenced).

February 10, 2023 - 11:40:40
Django version 3.2, using settings 'sierra.settings.my_dev'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

Try going to http://localhost:$DJANGO_PORT/api/v1/ in a browser (substituting the actual port). You should see a Django REST Framework page displaying the API Root.

Production Note: For production, you must configure Django to work with a real web server, like Apache. See the Django documentation for more details.

Celery

This command will start a Celery worker server that can run our project tasks. Note that you must use -c 4 to limit concurrency to 4 simultaneous tasks — running more than four at once will run afoul of Sierra's limitation on simultaneous database connections per user.

cd catalog-api/django/sierra
/path/to/venv/bin/celery -A sierra worker -l info -c 4

When you run this, you'll get some INFO logs, as well as a UserWarning about not using the DEBUG setting in a production environment. Since this is development, it's nothing to worry about. You should get a final log entry with celery@hostname ready.

Celery Beat

Celery Beat is the task scheduler that's built into Celery. It's what lets you schedule your export jobs to run at certain times. In development you generally don't need this; it's mainly for scheduling production export jobs.

If you want to run it in development, use the following (while Celery is running).

cd catalog-api/django/sierra
/path/to/venv/bin/celery -A sierra beat -S django

You should see a brief summary of your Celery configuration, and then a couple of INFO log entries showing that Celery Beat has started.

Production Note: See the Celery documentation for how to set up periodic tasks. In our production environment, we use django-celery-beat and the Django DatabaseScheduler to store periodic-task definitions in the Django database. These are then editable in the Django Admin interface.

Convenience Scripts (Deprecated)

In the repository root we have some old shell scripts (start_servers.sh, stop_servers.sh, and start_celery.sh) for starting/stopping the needed catalog-api processes in a development environment, but these have not been updated in a long time and are considered deprecated. Really, use the Docker environment for development.

Check to make sure Sierra data exports work.

With all of your services running, follow the steps in this section to make sure you can export data from Sierra and view the results in the API.

Configuring Local Settings

You must configure local settings like database connection details for your instance of the catalog-api. Where possible, we provide usable default values, with simple ways of overriding them.

Django Settings

You'll find Django settings for the catalog-api project in catalog-api/django/sierra/sierra/settings. Here, the base.py module contains overall global settings and defaults. The dev.py, production.py, and test.py modules then import and override the base settings to provide defaults tailored for particular types of environments.

You can set a default settings file to use via a DJANGO_SETTINGS_MODULE environment variable. You can also specify a particular settings file when you run a catalog-api command or service through manage.py, using the --settings option.
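For example, using the dev settings module:

# Set a default settings module for your shell session:
export DJANGO_SETTINGS_MODULE=sierra.settings.dev

# Or specify it for a single command:
python manage.py runserver --settings=sierra.settings.dev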

In many cases it's perfectly reasonable to configure a Django project to run locally by changing or creating a Django settings file. However, we've set up the catalog-api to minimize this need by reading local settings from environment variables.

Under most circumstances, we recommend customizing your local environment by setting environment variables, not by changing the Django settings files. If you're running the catalog-api using Docker, then this is especially true (unless you're modifying the Docker configuration as well).

Environment Variables

Set these up using one or both of two methods: a .env settings file, or variables set directly in your system environment.

These are not necessarily mutually exclusive. The set of variables defined in the .env file will automatically merge with the set of variables in the system environment, with system environment variables taking precedence if any are set in both places.

Production Note: Use the .env file in production. Then you don't have to mess with setting environment variables in whatever process is running your WSGI server (e.g., Apache mod_wsgi). Just be sure to protect it! If your WSGI process runs as capi:capi, chown the file to root:capi and chmod it to e.g. 0440.
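Concretely, if your WSGI process runs as capi:capi, that protection looks like:

sudo chown root:capi /path/to/catalog-api/django/sierra/sierra/settings/.env
sudo chmod 0440 /path/to/catalog-api/django/sierra/sierra/settings/.env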

Docker Notes

Configuring Environment Variables

First, take a look at the catalog-api/django/sierra/sierra/settings/.env.template file. This contains the complete set of environment variables used in the Django settings that you may configure. Most are optional, where the settings file configures a reasonable default if you do not set the environment variable. A few are required, where setting a default does not make sense. Some are needed only if you're deploying the project in a production environment. Note that many of these are things you want to keep secret.

Assuming you're setting all of the variables in your .env file, you'd copy catalog-api/django/sierra/sierra/settings/.env.template to catalog-api/django/sierra/sierra/settings/.env. Update the variables you want to update, and remove the ones you want to remove.

Required Settings

Your settings file won't load without these.

When using the Docker setup, the default Django DB is created for you automatically using the username and password you have in the DEFAULT_ environment variables. If not using the Docker setup, you must set up that database yourself.

These last two variables are required only if you're not using the Docker setup. In Docker, these are relative to the container and are overridden in the Dockerfile. Outside Docker, they're of course relative to your filesystem.

Optional Settings, Development or Production

These are settings you may need to set in a development or production environment, depending on circumstances. If the variable is not set, the default value is used.

Production Settings

These are settings you'll probably only need to set in production. If your development environment is very different from the default setup, you may need to set these there as well.

The four remaining variables are DEFAULT_DB_ENGINE, DEFAULT_DB_NAME, DEFAULT_DB_HOST, and DEFAULT_DB_PORT. These, along with the DEFAULT_DB_USER and DEFAULT_DB_PASSWORD, configure the default Django database. Because the Docker setup is now the recommended development setup, this defaults to using MySQL or MariaDB, running on 127.0.0.1:3306.

Test Settings

The .env.template file includes a section for test settings. These define configuration for test copies of the default database, the Sierra database, Solr, and Redis. The variables prefixed with TEST correspond directly with non-test settings (ones not prefixed with TEST).

If you will be running tests through Docker, then the only required settings are TEST_SIERRA_DB_USER, TEST_SIERRA_DB_PASSWORD, TEST_DEFAULT_DB_USER, and TEST_DEFAULT_DB_PASSWORD. Test databases will be created for you automatically with these usernames/passwords.

If running tests outside of Docker, then you will have to configure all of these test instances manually and include full configuration details in your environment variables.

Docker Note: If you're using Docker, you should note that all of the HOST and PORT settings (except those associated with the live Sierra database) define how services running in Docker containers map to your host machine. For example, if SOLR_HOST is 127.0.0.1 and SOLR_PORT is 8983, then when you're running the solr-dev service via Docker Compose, you can access the Solr admin screen from your host machine on http://localhost:8983/solr/. The default settings are designed to expose all services locally on the host machine, including test services, without raising port conflicts.

Docker-Compose-Only Settings

The very last section of the .env.template file contains settings that are only used by the Docker setup, for testing and/or development.

Here you can (optionally) define version information for external components. Define what images Docker uses for Python (DOCKER_PYTHON_IMAGE), MySQL or MariaDB (DOCKER_MYSQL_IMAGE), Postgres (DOCKER_POSTGRES_IMAGE), Solr (DOCKER_SOLR_IMAGE), and Redis (DOCKER_REDIS_IMAGE). Also define the luceneMatchVersion for each of your indexes on each core (DOCKER_SOLR_HAYSTACK_LUCENE_VERSION, DOCKER_SOLR_DISCOVER01_LUCENE_VERSION, and DOCKER_SOLR_DISCOVER02_LUCENE_VERSION). Note that each new major or minor Solr version implements a new Lucene version, but Lucene versions going back to the previous major Solr version will still work. So, Lucene version 8.0 and above should work with Solr 9.X. Updating the Lucene version requires reindexing.
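As a hypothetical example, overriding a couple of these in your .env file might look like the following (the image references and version numbers are illustrative only; the tested defaults live in /docker-compose.env):

DOCKER_SOLR_IMAGE=solr:9.4
DOCKER_REDIS_IMAGE=redis:7.2
DOCKER_SOLR_HAYSTACK_LUCENE_VERSION=9.4.0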

Defaults for all of these settings are defined in /docker-compose.env. Defaults represent the minimum tested versions. The reason these can be set locally is so that you can more easily test or develop against whatever versions you're using.

Important: Many of these settings affect your built Docker environment, and many affect your Docker data. When you change them, prepare to get rid of any dev data that you may have. A good rule-of-thumb is to rebuild your Docker environment and reinitialize Docker data when you change any of these.

./docker-compose.sh build
./init-dockerdata.sh -f all

You can of course hold off on reinitializing data if (for example) you're testing an upgrade and want to see what happens to existing data. This is especially useful for Solr. E.g., index some data using your current production version, then leave the LUCENE_VERSION alone but bump the DOCKER_SOLR_IMAGE version. Rebuild without reinitializing the data, spin up a new dev instance, and try it out.

Testing

Running Sierra Database Checks

Early in development we implemented a series of tests using the built-in Django test runner to do some simple sanity-checking to make sure the Django ORM models for Sierra match the structures actually in the production database. We have since converted these to run via pytest: see django/sierra/base/tests/test_database.py.

When you run the full test suite, as described below, these run against the test Sierra database — which is useful. But, there are times that you'll want to run these tests against your live database to make sure the models are accurate. For instance, systems may differ from institution to institution based on what III products you have, so you may end up needing to fork this project and update the models so they work with your own setup. It may also be worth running these tests after Sierra upgrades so that you can make sure there were no changes made to the database that break the models.

If using Docker, run only the database tests using the following:

./docker-compose.sh run --rm live-db-test

If not using Docker, you can use the command below instead. If applicable, replace the value of the --ds option with your own dev settings file.

pytest --ds=sierra.settings.dev django/sierra/base/tests/test_database.py

Note: Some of these tests may fail simply because the models are generally more restrictive than the live Sierra database. We are forcing ForeignKey-type relationships on a lot of fields that don't seem to have actual database-enforced keys in Sierra. E.g., from what I can gather, id fields are usually proper keys, while code fields may not be — but code fields are frequently used in a foreign-key-like capacity. I think this leads to a lot of the invalid codes you have in Sierra, where you have a code in a record that should point to some entry in an administrative table (like a location), but it doesn't because the administrative table entry was deleted and the record was never updated. And there are other cases, as well. E.g., a code might use the string none instead of a null value, but there is no corresponding entry for none in the related table. Bib locations use the string multi to indicate that they have multiple locations, but there is no corresponding multi record in the location table. Etc.

Ultimately, even though these code relationships aren't database-enforced keys, we do still want the ORM to handle the relationships for us in the general case where you can match a code with the entry that describes it. Otherwise we'd have to do the matching manually, which would somewhat reduce the utility of the ORM.

Running Unit(ish) Tests

We also have decent coverage with unit (or unit-ish) tests. Although it is possible to run them outside of Docker if you're motivated enough, we recommend using Docker.

If you followed the Docker setup, you can run all available pytest tests with:

./docker-compose.sh run --rm test

If you didn't follow the Docker setup, then you should still be able to create a comparable test environment:

cd catalog-api/django/sierra
python manage.py migrate --settings=sierra.settings.test --database=default
python manage.py migrate --settings=sierra.settings.test --database=sierra

Spin up all of the needed test databases, and then run:

pytest

Testing Sierra Exports Manually

A good final test to make sure everything is working once you have things set up is to trigger a few record exports and make sure data shows up in the API.

License

See LICENSE.txt.

Contributors