
Backend - Piattaforma Proiezioni Climatiche per il Nord-Est


This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 IT License.

Commissioned by & Data credits to
ARPAV

The current version was designed and developed in Italy by

A previous version was originally developed by inkode


This repository contains source code for the backend components of the ARPAV-PPCV platform.

Its main goal is to serve climate-related data in the form of both historical observations and forecast models.

Briefly, the backend component consists of two main services:

  1. A web application that serves an OpenAPI API, which is consumed by the frontend.
  2. A worker that executes workflows outside the request/response cycle of the web application.
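As a rough illustration of this split (a sketch, not the project's actual code), the point of the worker is that slow jobs are handed off and processed outside the request/response cycle:

```python
import queue
import threading

# In-memory stand-in for whatever broker the real worker uses.
tasks: queue.Queue = queue.Queue()
results: list = []

def worker() -> None:
    """Drain queued jobs outside the request/response cycle."""
    while True:
        job = tasks.get()
        if job is None:  # sentinel: stop the worker
            break
        results.append(f"done: {job}")
        tasks.task_done()

thread = threading.Thread(target=worker)
thread.start()

# A request handler would enqueue the job and return immediately,
# leaving the heavy lifting to the worker thread/process.
tasks.put("refresh-observations")  # hypothetical job name
tasks.put(None)
thread.join()
```

In the real system the queue would be an external broker and the worker a separate service, but the control flow is the same.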

The backend contains some additional services, which support it and provide extra functionality, namely:

The main applications are launched by means of custom CLI commands. The CLI additionally provides numerous maintenance commands, such as upgrading the database schema, refreshing historical observations data, etc.
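The general shape of such a CLI can be sketched with the standard library's argparse (the `upgrade-db` name below is hypothetical; `run-server` is shown in the Operations section):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Top-level command with one sub-parser per task, mirroring the
    # launch-plus-maintenance layout described above.
    parser = argparse.ArgumentParser(prog="arpav-ppcv")
    subcommands = parser.add_subparsers(dest="command", required=True)
    subcommands.add_parser("run-server", help="start the web application server")
    subcommands.add_parser("upgrade-db", help="upgrade the database schema")  # hypothetical
    return parser

args = build_parser().parse_args(["run-server"])
```

Each sub-parser can declare its own arguments, which is what makes the per-sub-command `--help` output useful.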

The backend is implemented in Python, using these main libraries and frameworks:

Installation

The primary means of installing the various backend components is by using docker compose. Use the compose.* files provided in the docker directory.

For example, for development:

docker compose -f docker/compose.yaml -f docker/compose.dev.yaml up -d

Standing up the various components without docker is also possible; check the compose file to see how each service is run. The main web application uses poetry, so installing it is just a matter of running poetry install.

Configuration

This application is configured via environment variables. By default all settings are prefixed with ARPAV_PPCV__, although this prefix can be changed if needed. The system recognizes the following environment variables:
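The naming convention can be illustrated with a small sketch (the setting names below are made up; only the ARPAV_PPCV__ prefix comes from this document):

```python
import os

PREFIX = "ARPAV_PPCV__"

def load_settings(environ=os.environ) -> dict:
    """Collect every variable carrying the settings prefix, mapping
    e.g. ARPAV_PPCV__DEBUG=true to {"debug": "true"}."""
    return {
        name[len(PREFIX):].lower(): value
        for name, value in environ.items()
        if name.startswith(PREFIX)
    }

settings = load_settings({
    "ARPAV_PPCV__DEBUG": "true",                        # hypothetical setting
    "ARPAV_PPCV__PUBLIC_URL": "http://localhost:8877",  # hypothetical setting
    "PATH": "/usr/bin",                                 # ignored: no prefix
})
```

The actual application will have its own typed settings machinery; this only shows how the prefix scopes the variables the system will pick up.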

Operations

Accessing the CLI

The CLI is named arpav-ppcv. When running under docker compose, it can be used with the following incantation:

docker exec -ti arpav-ppcv-webapp-1 poetry run arpav-ppcv <sub-command>

There are numerous sub-commands, and each may accept additional arguments, so please check the help of the sub-command you want to run by passing the --help flag.

For example, running the web application server can be achieved with:

docker exec -ti arpav-ppcv-webapp-1 poetry run arpav-ppcv run-server
Accessing the web API

When using the development docker compose file(s), the web application server is accessible at:

http://localhost:8877

The auto-generated API docs are accessible at the /api/v2/docs endpoint.

Using the web admin

When using the development docker compose file(s), the admin section is available at:

http://localhost:8877/admin

Deployment

Development environment

The development environment runs on each individual developer's machine. In order to get a working dev deployment set up:

Building the docker image locally

Build the docker image by running this command:

docker build --tag ghcr.io/geobeyond/arpav-ppcv-backend/arpav-ppcv-backend .

If you want to build an image for the current branch, such as when you have added a new third-party dependency as part of an ongoing task, add the branch name as the image tag:

docker build --tag ghcr.io/geobeyond/arpav-ppcv-backend/arpav-ppcv-backend:$(git branch --show-current) .

In order to use this custom-named image in your local development environment, set the CURRENT_GIT_BRANCH environment variable before launching the docker compose stack, e.g.:

export CURRENT_GIT_BRANCH=$(git branch --show-current)
docker compose -f docker/compose.yaml -f docker/compose.dev.yaml up -d
Staging environment

Deployments to the staging environment are automated and happen whenever a new docker image is published to the project's container registry. This is governed by a two-stage workflow, orchestrated via GitHub Actions:

The strategy described above employs an installation of the webhook server, together with some custom deployment scripts.
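The mechanism can be sketched as follows (the endpoint path, payload shape, and deploy function are assumptions for illustration; the actual setup relies on the webhook server together with the custom deployment scripts):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

deployed: list = []  # records which images were "deployed"

def run_deploy_script(image: str) -> None:
    # Stand-in for a custom deployment script; a real one would e.g.
    # pull the new image and restart the compose stack.
    deployed.append(image)

class DeployHook(BaseHTTPRequestHandler):
    def do_POST(self) -> None:
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        payload = json.loads(body or b"{}")
        run_deploy_script(payload.get("image", "unknown"))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args) -> None:  # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), DeployHook)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the registry notifying the webhook that a new image exists.
req = Request(
    f"http://127.0.0.1:{server.server_port}/hooks/deploy",  # hypothetical path
    data=json.dumps(
        {"image": "ghcr.io/geobeyond/arpav-ppcv-backend:latest"}
    ).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urlopen(req) as resp:
    status = resp.status
server.shutdown()
```

The point is only the control flow: image published, webhook endpoint hit, deployment script executed on the target host.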

Relevant places to look for configuration in the staging environment, in addition to the ${HOME} directory:

Production environment

Deployments to the production environment are automated. They are based on git tags and are governed by a two-stage workflow, orchestrated via GitHub Actions:

NOTES

Testing

The system has a set of automated tests which run whenever a new PR is submitted and also whenever a change is merged to the repository's main branch. This is triggered by means of a GitHub Actions workflow and uses [dagger](https://dagger.io/) for the actual testing pipeline. Running the same pipeline locally can be achieved by:

Testing uses these main additional libraries/frameworks:

Git pre-commit

In order to speed up the cycle between opening a PR and having the changes reviewed and merged, you can install pre-commit and enable the configuration provided in this repo. This ensures that commits are suitably formatted and checked, and that they arrive at the official repo in a clean state.

Vulnerability scanning

There is a GitHub Actions workflow that runs daily and checks the code for known vulnerabilities. It uses trivy. The vulnerability scan can also be run locally using the command:

    dagger run poetry run python tests/ci/main.py --with-security-scan