This project aggregates the several components used for the HYFAA/MGB platform.
Clone this repository if you haven't already:
git clone --recurse-submodules https://github.com/OMP-IRD/hyfaa-mgb-platform.git
If you have already cloned it, you should update it:
git pull --recurse-submodules
docker-compose -f docker-compose.yml -f docker-compose-production.yml build
or, if you're just going to use it for development:
docker-compose build
Several docker-compose files are provided, which you can combine depending on the scenario.
Run docker-compose up.
The platform will start, but no data will be loaded or processed.
You first need to
Then start the composition:
docker-compose -f docker-compose.yml -f docker-compose-production.yml up
This will start the platform, and the scheduler for a first run.
If it is truly the first run, it will take a long time (approx. 1 hour), because the scheduler will initialize the model (and, the first time, publish it to the DB). Then it stops; the data will be published to the DB and the web platform will be fully operational (this step is not yet implemented).
There is no built-in CRON task to re-run the HYFAA scheduler. You'll have to run it manually (docker-compose -f docker-compose.yml -f docker-compose-production.yml start scheduler) or set up a CRON task on your machine that runs the same command.
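As a sketch, such a CRON task could look like the entry below. The repository path and the schedule are assumptions; adjust them to your setup:

```shell
# Hypothetical crontab entry: re-run the HYFAA scheduler every day at 02:00.
# /path/to/hyfaa-mgb-platform is an assumed location for your clone.
0 2 * * * cd /path/to/hyfaa-mgb-platform && docker-compose -f docker-compose.yml -f docker-compose-production.yml start scheduler
```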
The scheduler writes its data to work_configurations. By default, it runs as root, which makes the output a mess to clean up or manipulate.
In docker-compose, the user is configured using the USER_ID and GROUP_ID environment variables. By default, those variables are not set. You can set them inline:
USER_ID="$(id -u)" GROUP_ID="$(id -g)" docker-compose ...
Note: in a CRON task, environment variables defined in your .bashrc file are not available, so it's better to use this inline form.
After the scheduler has finished running, you'll want to publish the data into the database. This is done using a script from the hyfaa-backend container (for now). You can do it with the following command:
docker-compose run backend python3 /hyfaa-backend/app/scripts/hyfaa_netcdf2DB.py /hyfaa-scheduler/data/
Note: you can load only the latest values by adding --only_last_n_days 20 at the end of this command.
To run the scheduler and publish the data, the recommended way is to create a cron task that chains the scheduler run and this publication command.
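As a sketch, the cron task could call a small wrapper script chaining the two commands shown above. The script name and repository path below are assumptions:

```shell
#!/bin/sh
# Hypothetical wrapper (e.g. run_hyfaa.sh) chaining the scheduler run and
# the DB publication step. /path/to/hyfaa-mgb-platform is an assumed path.
set -e
cd /path/to/hyfaa-mgb-platform

# Run the containers as the current user (see the USER_ID/GROUP_ID note above),
# so the files written to work_configurations are not owned by root.
export USER_ID="$(id -u)" GROUP_ID="$(id -g)"

# 1. Run the scheduler (production composition).
# Note: docker-compose start returns immediately; you may need to wait for
# the scheduler container to finish before publishing.
docker-compose -f docker-compose.yml -f docker-compose-production.yml start scheduler

# 2. Publish the produced data into the database.
docker-compose run backend python3 /hyfaa-backend/app/scripts/hyfaa_netcdf2DB.py /hyfaa-scheduler/data/
```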
In [platform] dev mode, you most likely won't want to run the scheduler, because it takes a lot of time and resources. Instead, you can load some sample data into the DB with:
docker-compose -f docker-compose.yml -f docker-compose-dev.yml up
The pg_tileserv server is accessible at http://localhost/tiles
The interesting layers are:
Traefik, the proxy, has an SSL config ready, using Let's Encrypt (when developing on localhost, it will automatically use a self-signed certificate).
By default, it uses the Let's Encrypt staging backend, to avoid hitting the rate limits on certificate issuance. This will result in security warnings about the certificate.
When going to production, adjust the caServer line in config/traefik.yml (by default it points at the staging backend), then run docker-compose down followed by docker-compose up -d.
The Traefik dashboard has been configured for secured access. It is available over HTTPS, on the domains listed in the docker-compose file. The users/passwords are defined in the secrets.auth_users.txt file. You can update the file's definition by running
htpasswd -nb -C 31 admin [yourpasswd]
and replacing the file's content with the output of this command.
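Assuming the same command and file name as above, the output can be written straight into the file in one step:

```shell
# Regenerate the dashboard credentials file directly.
# 'admin' and [yourpasswd] are placeholders, as in the command above.
htpasswd -nb -C 31 admin [yourpasswd] > secrets.auth_users.txt
```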
Prometheus metrics are exposed on localhost:8000/metrics. TODO: secure access?
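As a quick sanity check (assuming the stack is up and the port mapping is unchanged), you can fetch the endpoint:

```shell
# Print the first few lines of the Prometheus metrics exposition
# to verify the endpoint responds.
curl -s http://localhost:8000/metrics | head -n 5
```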