Research management platform for the Fragments of the Roman Republican Antiquarians grant.
High level design: https://wiki.ucl.ac.uk/display/IA/Republican+Antiquarian+HLD
Unless otherwise noted, everything in this repo is licensed under GPL3. Details are in LICENSE.txt.
Branches:
production holds the production code installed on the production server.
feature/<branch_name> and fix/<branch_name> branches contain work in progress that will be merged onto the development branch via approved pull requests.
This project uses containers. Below are setup instructions and useful commands.
Create a working folder, e.g. ~/projects/, and cd into it
Clone the repo and check out the development branch:
git clone git@github.com:UCL/frrant.git
cd frrant
git checkout development
mkvirtualenv -a ~/projects/frrant frrant
pip install -r src/requirements/local.txt
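The mkvirtualenv command above assumes virtualenvwrapper is installed. If so, you can reactivate the environment in later sessions with:
workon frrant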
Docker needs to know the full path of your project folder (so it can look for changes). Add it to the file sharing list in your Docker client's settings, which typically already contains entries such as /tmp and /var. Keep this list to the absolute minimum, as Docker can use a lot of resources when watching folders for changes. Docker needs to restart for the changes to take effect.
More info is available online for your specific client, e.g. here for the Mac client: https://docs.docker.com/docker-for-mac/#file-sharing
cd ~/projects/frrant/src
Note that for running locally (on our own machines) we use the local.yml config and not production.yml, which is for deployment to the production platform.
docker-compose -f local.yml build
To ensure everything is built from scratch, you can add --no-cache:
docker-compose -f local.yml build --no-cache
docker-compose -f local.yml up
or as a background process:
docker-compose -f local.yml up -d
NB the output will be hidden when run in the background. To inspect the logs after the fact:
docker-compose -f local.yml logs -f
(The -f flag will update the output as more log messages come in. Omit -f to just see a snapshot.)
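As a quick sanity check that the server is up (assuming the default port mapping), you can request the home page from the command line:
curl -I http://localhost:8000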
Navigate to localhost:8000 in your browser and you should see the project's home page.
To restart the project, e.g. if changes have been made to code that don't require the container to be rebuilt, use:
docker-compose -f local.yml down
docker-compose -f local.yml up
If any changes are made to the machine configuration, e.g. new pip packages installed, then the project will need to be rebuilt. The safest approach is to rebuild with no cache (see above), though this takes longer. Depending on the change, it might be worth trying a rebuild first without the --no-cache option to save time.
docker-compose -f local.yml down
docker-compose -f local.yml build [--no-cache]
docker-compose -f local.yml up
User accounts are created by the superuser. For convenience, on local development containers only, a superuser account is automatically created when the container is built.
Log into the admin area at localhost:8000/admin using username developer and password developer.
Create users in the standard Django admin.
Log out of the admin and navigate to localhost:8000
and follow the links to log in with the credentials you entered.
Commands (shell or Django commands etc.) are run on the Docker container via docker-compose using the local configuration.
Your container needs to be running while these commands are run.
To stop the containers:
docker-compose -f local.yml down
Adding the -v option will also delete any volume data, e.g. Postgres data:
docker-compose -f local.yml down -v
To open a Django shell on the container:
docker-compose -f local.yml run django python manage.py shell
To open a psql session on the database:
docker compose -f local.yml exec postgres psql -d rard -U ftTUlBrLLgvsMhazNCenkVazbscHpykq
Other Django management commands (for example createsuperuser) can be run in the same way:
docker-compose -f local.yml run django python manage.py <command> [arguments]
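For instance, to create an additional superuser interactively:
docker-compose -f local.yml run --rm django python manage.py createsuperuser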
Note that adding --rm to the commands means the container in which the tests are run is deleted once the tests have finished.
docker-compose -f local.yml run --rm django pytest
docker-compose -f local.yml run --rm django coverage run -m pytest
NB to view coverage results, run:
docker-compose -f local.yml run --rm django coverage report
which prints a top-level summary to the console. Alternatively, for more detail, run:
docker-compose -f local.yml run --rm django coverage html
which creates a local folder htmlcov. Browse to the file index.html in that folder.
To run just one test file, e.g.:
docker compose -f local.yml run --rm django coverage run -m pytest -v ./rard/research/tests/views/test_bibliography.py
or just run the failing test or test class (marked with @pytest.mark.failingtest):
docker compose -f local.yml run --rm django coverage run -m pytest -m failingtest
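You can also target a single test class or function using pytest's node-id syntax (the class name below is hypothetical):
docker compose -f local.yml run --rm django pytest ./rard/research/tests/views/test_bibliography.py::TestBibliographyView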
git add .
pre-commit run
If you get a red Fail message, run both lines again until there are no Fails.
docker-compose -f local.yml run --rm django flake8
(no errors means check passed)
docker-compose -f local.yml run --rm django pycodestyle
(no errors means style check passed)
docker-compose -f local.yml run --rm django mypy rard
The command below will correct the order of your Python imports to meet the standard:
docker-compose -f local.yml run --rm django isort .
(import order errors when running the flake8 command should be fixed automatically by running the above command)
If you make changes to model definitions (i.e. in the models.py
files) then these need to be reflected in a migration file.
A migration file is a Python 'patch' file generated by Django that applies the changes you have made to the model definitions to the actual database, keeping it up to date.
If you have made a change to a model and need to generate a migration file:
docker-compose -f local.yml run django python manage.py makemigrations
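Running this generates a file under the relevant app's migrations folder. As a rough, hypothetical sketch of what such a file looks like (the file, model and field names below are made up):

# rard/research/migrations/0042_fragment_notes.py (hypothetical example)
from django.db import migrations, models

class Migration(migrations.Migration):
    # the migration this one builds on
    dependencies = [
        ("research", "0041_previous_migration"),
    ]
    # the schema change to apply to the database
    operations = [
        migrations.AddField(
            model_name="fragment",
            name="notes",
            field=models.TextField(blank=True, default=""),
        ),
    ]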
You will then either need to restart your container so that these migrations are applied:
docker-compose -f local.yml down
docker-compose -f local.yml up
Alternatively, to apply the latest migrations without restarting the container:
docker-compose -f local.yml run django python manage.py migrate
Sometimes migrations conflict, e.g. if two developers have both generated migration files and Django doesn't know in which order they should be applied. If the two migrations are independent of one another (i.e. relate to different models) then Django can automatically merge these migration files for you by running:
docker-compose -f local.yml run django python manage.py makemigrations --merge
and following the instructions.
If you have made changes to your models and have not generated a migration file then Django will warn you in the console output with something like:
Your models have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
NB In our case we would run the suggested manage.py commands via the Docker container in the way shown above, i.e.
docker-compose -f local.yml run django python manage.py makemigrations
and
docker-compose -f local.yml run django python manage.py migrate
If your static files (css, js, fonts etc) have changed then on the production environments you will also need to refresh all of the static files in the deployed static folder.
docker-compose -f production.yml run django python manage.py collectstatic
Answer 'yes' to the 'are you sure' question. It might say something like '0 files updated' but this message is notoriously misleading. Check to see whether the static files (js etc.) being used are the ones you expect.
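To skip the confirmation prompt, e.g. when scripting a deployment, collectstatic accepts Django's standard --noinput flag:
docker-compose -f production.yml run django python manage.py collectstatic --noinput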
To take a snapshot of e.g. the production database and install it locally we do the following:
Go to the production machine and run the following command:
sudo docker-compose -f production.yml run --rm django python manage.py dumpdata --indent=4 --natural-foreign --natural-primary -a > dump.json
This dumps all (-a) the data in the database into a json file that we can load locally. Check the end of the file to ensure there are no unexpected characters, particularly if you get an error similar to:
json.decoder.JSONDecodeError: Extra data: line 1722506 column 1 (char 53971086)
Now scp the json file to your local machine, e.g. (the username, host and path below are placeholders):
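scp <user>@<production-host>:/path/to/dump.json .

Then run the following commands locally.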
First, remove existing database:
docker-compose -f local.yml down -v
Rebuild database:
docker-compose -f local.yml up -d --build
Run the src/loaddata.sh script to load the dump:
./loaddata.sh dump.json
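The script is assumed here to be a thin wrapper around Django's loaddata management command, roughly equivalent to:
docker-compose -f local.yml run --rm django python manage.py loaddata dump.json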
Requirements are applied when the containers are built.
The requirements file for local development includes pre-commit. To set this up on your machine, make sure you're in the frrant directory and run:
pre-commit install
This will ensure the hooks specified in .pre-commit-config.yaml are run prior to every commit. Black should automatically correct most formatting errors, so you can just try the commit again in most cases. You can turn pre-commit off at any time with pre-commit uninstall if you need to.
If pre-commit is not working, e.g. causing version problems with black, you can bring black, isort, etc. up to date with:
pre-commit autoupdate
GitHub Actions are currently set up to run linter and pytest jobs on pull requests and pushes to the development branch. For the linter tests we run the pre-commit hooks, and for pytest we build the docker stack and run the full set of pytest tests.
To deploy we need to set some environment variables (the example shown is for development; replace with the production folder on production).
Under the folder .envs in the project folder (ls -la shows folders starting with .):
mkdir .development
cd .development
sudo vi .django
And enter the following:
USE_DOCKER=yes
IPYTHONDIR=/app/.ipython
DJANGO_SETTINGS_MODULE=config.settings.development
DJANGO_SECRET_KEY=<secret key>
DJANGO_ADMIN_URL=admin/
DJANGO_ALLOWED_HOSTS=<domain name>
URL_PREFIX=history/frrant-preprod/
In the above, the domain name will be e.g. frrd-dev.addev.ucl.ac.uk for development. If more than one is required then they can be comma-separated (NB do not put them inside brackets []).
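For example, with two placeholder hostnames:
DJANGO_ALLOWED_HOSTS=<domain1>,<domain2>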
Next
sudo vi .pgadmin
and enter the following:
PGADMIN_DEFAULT_EMAIL=<username>
PGADMIN_DEFAULT_PASSWORD=<password>
for the username/password combo to use for pgadmin. This admin interface allows you to manage backups etc.
Finally
sudo vi .postgres
and enter the following:
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=frrant
POSTGRES_USER=devuser
POSTGRES_PASSWORD=<secret password>
for the username/password to use for the postgres database.
If your static files have changed in a deployment (css, js, fonts etc) then you need to run the 'collectstatic' command described earlier.
This development version will appear at the path /history/frrant-preprod. You can deploy it on a development machine as well, and it will appear at the path https://localhost/history/frrant-preprod/
Before building the nginx container we need to do the following for development (and production):
cd compose/development/nginx/certs
sudo openssl req -x509 -nodes -days 365 -config rardselfsigned.cnf -newkey rsa:2048 -keyout rardselfsigned.key -out rardselfsigned.crt
This will generate a certificate and corresponding key file.
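To sanity-check the generated certificate, e.g. its validity dates and subject, you can inspect it with:
sudo openssl x509 -in rardselfsigned.crt -noout -dates -subject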
If your deployment machine is light on space on /var then it is possible to ask Docker to build containers elsewhere by modifying the daemon configuration file:
sudo vi /etc/docker/daemon.json
Set its content to
{
"graph": "/data/docker"
}
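NB on newer Docker versions the graph key is deprecated in favour of data-root; if Docker rejects graph, set instead:
{
    "data-root": "/data/docker"
}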
Then run:
sudo systemctl daemon-reload
sudo systemctl restart docker
Then restart your container. This example will build containers in the /data directory instead of /var.
To view running containers as a list:
sudo docker ps
To view logs for a particular container in the above list:
sudo docker logs <containerid> -f
The -f flag prints new lines as they are written. The <containerid> is the same as shown in the output of docker ps.
Remove yum caches
sudo yum clean all
If your containers are built to /var then this will clean those up:
sudo docker system prune
You need to ensure IP forwarding is enabled. Edit or create the file /etc/sysctl.conf and add or change the following line:
net.ipv4.ip_forward = 1
Then apply the settings with e.g.
service network restart
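Alternatively, sysctl can reload the file directly without restarting the network service:
sudo sysctl -p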
If the JavaScript and CSS are not looking how you expect after a production deployment, ensure you have run 'collectstatic' on the target machine(s). See the description given earlier.
During the software development stage, we work in sprint cycles of two to four weeks, depending on workload and availability. Each sprint has a planning meeting at the beginning, a meeting with the research team to demo all new changes, and a deployment to production if all changes have been approved by the research team during the meeting. We are currently using this Zenhub workspace to plan tasks in each sprint, and we share a GitHub priority board with the research team to better understand which tasks are essential for them and which can wait.
Plans for next steps include running retrospectives in each internal sprint cycle, as well as making the pre-production/development server ready for the researchers so they can test and approve changes as we make them instead of waiting until the next meeting when we'll demo them live.