This repository contains the different components needed to produce and maintain the Hakai ERDDAP servers and associated datasets. Server update status:

All datasets made available in the ERDDAP XML format within the `datasets.d` folder are deployed to the production server.
Hakai deploys ERDDAP as Docker containers using the `docker-erddap` image. Continuous integration is handled via the `erddap-deploy` actions, and the container configuration is handled via CapRover applications.
See GitHub Deployments for all active deployments maintained via this repository.
ERDDAP relies on a `.env` file for configuration.
For local development, make a copy of the `sample.env` file as `.env`. Update the environment variables to match the deployed parameters, but omit the email parameters, `baseHttpsUrl`, and `baseUrl`.
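As a sketch of that step, the helper below copies the content of a `sample.env`-style file while dropping the parameters to be omitted. The `ERDDAP_email`/`ERDDAP_baseUrl`/`ERDDAP_baseHttpsUrl` prefixes are illustrative assumptions — check `sample.env` for the actual variable names:

```python
# Parameters to omit in a local .env (illustrative names; see sample.env).
OMIT_PREFIXES = ("ERDDAP_email", "ERDDAP_baseUrl", "ERDDAP_baseHttpsUrl")

def make_local_env(sample_text: str) -> str:
    """Return sample.env content with email/baseUrl/baseHttpsUrl lines dropped."""
    kept = []
    for line in sample_text.splitlines():
        key = line.split("=", 1)[0].strip()
        if key.startswith(OMIT_PREFIXES):
            continue  # skip parameters that should not be set locally
        kept.append(line)
    return "\n".join(kept)

sample = "HOST_PORT=8080\nERDDAP_baseUrl=http://example.org\nERDDAP_emailEverythingTo=a@b.c"
print(make_local_env(sample))  # -> HOST_PORT=8080
```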
Add test files if needed within the `datasets/` directory.
Run `docker-compose`:

```shell
docker-compose up -d
```
If successful, you should be able to access your local ERDDAP instance at `http://localhost:{HOST_PORT}/erddap` (default: http://localhost:8080/erddap).
This is a step-by-step procedure to generate a new dataset:

1. Create the dataset XML and place any test data files within the `./datasets/` folder mounted within the container.
2. Test the dataset XML with ERDDAP's tool: `sh DasDds.sh`.
3. Commit the changes to the `development` branch.
4. Make the production data available on the `catalogue.hakai.org` server at `/data/erddap_data`.
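Before running `DasDds.sh`, it can help to sanity-check a dataset XML file — for example, that no `datasetID` is defined twice. A minimal sketch, assuming the standard ERDDAP `datasets.xml` layout of `<dataset datasetID="…">` elements:

```python
import xml.etree.ElementTree as ET
from collections import Counter

def duplicate_dataset_ids(xml_text: str) -> list:
    """Return datasetID values that appear more than once in a datasets.xml document."""
    root = ET.fromstring(xml_text)
    ids = [ds.get("datasetID") for ds in root.iter("dataset")]
    return [dataset_id for dataset_id, count in Counter(ids).items() if count > 1]

xml_text = """<erddapDatasets>
  <dataset type="EDDTableFromDatabase" datasetID="ctd_data" active="true"/>
  <dataset type="EDDTableFromDatabase" datasetID="ctd_data" active="true"/>
</erddapDatasets>"""
print(duplicate_dataset_ids(xml_text))  # -> ['ctd_data']
```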
`hakai-erddap` uses the `hakai-metadata-conversion` package to periodically sync the different datasets' metadata with the latest changes made within the Hakai Catalogue.
If any changes are available, a PR to `development` should be generated automatically. Merge the changes into `development`, then follow a similar method as described within the production server section.
All views and tables generated from the different SQL queries made available in the `views` directory are run nightly on the `hecate.hakai.org` server from the `master` branch with the bash script `erddap_create_views.sh`.
ERDDAP relies on the different views and tables present within the `erddap` schema of the `hakai` database. Some of those views are a union of multiple tables hosted within the `sn_sa` schema. We use the module `update_erddap_views.py` to keep the different views in sync with all the associated tables. Use Poetry to install the required packages from `pyproject.toml`, then run the following command:

```shell
python update_erddap_views.py
```
Commit any changes made to the different files within `views/*.sql` to the `main` branch.
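Conceptually, keeping a union view in sync means regenerating its `UNION ALL` body from the current list of source tables. The sketch below is a hypothetical simplification of what a script like `update_erddap_views.py` might produce — the view and table names are invented for illustration:

```python
def build_union_view_sql(view_name: str, tables: list, schema: str = "sn_sa") -> str:
    """Generate a CREATE OR REPLACE VIEW statement that unions the given tables."""
    selects = "\nUNION ALL\n".join(f"SELECT * FROM {schema}.{table}" for table in tables)
    return f"CREATE OR REPLACE VIEW erddap.{view_name} AS\n{selects};"

# Hypothetical example: one view spanning two yearly tables in the sn_sa schema.
print(build_union_view_sql("all_stations", ["stations_2023", "stations_2024"]))
```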
All commits to this repository are tested by different linters through a PR or commit to the `development` and `master` branches. We use the super-linter library to run the different automated integration tests. If the linter and `erddap_deploy` tests pass, changes will automatically be reflected on the associated deployment via: