Ansible script for the backup server
Deploy scripts for the backup server
The service would act as a free version of Pingdom: hosted on the backup server, it calls the gateway health check endpoint and emails the sysadmin if anything is down, then emails them again when it comes back up.
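A minimal sketch of that check loop, assuming a `HEALTH_URL` environment variable and a local state file (both placeholder names, not confirmed deployment values); the real service would send mail through the configured SMTP service instead of printing:

```shell
#!/bin/sh
# Sketch only: HEALTH_URL, STATE_FILE and the alert text are assumptions.
STATE_FILE="${STATE_FILE:-/tmp/gateway-health.state}"

# Record the current status ("up"/"down") and print an alert line only
# when it differs from the previously recorded status.
notify_if_changed() {
  current="$1"
  previous="unknown"
  [ -f "$STATE_FILE" ] && previous=$(cat "$STATE_FILE")
  echo "$current" > "$STATE_FILE"
  if [ "$current" != "$previous" ] && [ "$previous" != "unknown" ]; then
    # The real service would email the sysadmin here via SMTP.
    echo "ALERT: gateway is $current"
  fi
}

# One poll: any 2xx response from the health endpoint counts as "up".
check_once() {
  if curl -sf --max-time 10 "$HEALTH_URL" > /dev/null; then
    notify_if_changed up
  else
    notify_if_changed down
  fi
}
```

Running `check_once` from cron (e.g. every minute) gives the "email on down, email again on recovery" behaviour, because alerts fire only on a state change.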
[ ] The application should list the backups that exist on the backup server where the application is hosted, filtered by date
Create some test files on another server which you can SSH into. The files should follow the naming convention listed below.
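This step can be sketched as a one-off script; the `ROOT_PATH` default, the date format and the service/database names are taken from the naming convention listed further down, so treat them as examples:

```shell
#!/bin/sh
# Create dummy backup artefacts matching the expected layout.
# The ROOT_PATH default and the date format are assumptions.
set -e
ROOT_PATH="${ROOT_PATH:-$HOME/test-backups}"
BACKUP_DATE="${BACKUP_DATE:-$(date +%Y-%m-%d)}"

# minio, metabase and influxdb back up into dated directories.
for svc in minio metabase influxdb; do
  mkdir -p "$ROOT_PATH/backups/$svc/$BACKUP_DATE"
done

# The mongo databases back up into dated .gz archives.
mkdir -p "$ROOT_PATH/backups/mongo"
for db in hearth-dev user-mgnt openhim-dev application-config metrics webhooks performance; do
  touch "$ROOT_PATH/backups/mongo/$db-$BACKUP_DATE.gz"
done
```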
Create a shell script in the scheduler microservice which will SSH into the server, check that the files exist, and prepare a JSON payload in the format shown below.
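A sketch of the check-and-payload step. Only the JSON-building helper is shown runnable here; the SSH invocation and the POST are left as comments because the host, key and healthcheck URL are deployment-specific (all names in them are assumptions):

```shell
#!/bin/sh
# In the real script each test would run remotely, e.g.:
#   ssh backup@backup-host "test -d $ROOT_PATH/backups/influxdb/$BACKUP_DATE"
# and the exit status would feed check_entry. Shown locally for clarity.

# check_entry KIND PATH NAME DATE -> one JSON object for the payload.
# KIND is "dir" or "file".
check_entry() {
  kind="$1"; path="$2"; name="$3"; date="$4"
  exists=false
  if [ "$kind" = dir ]; then
    [ -d "$path" ] && exists=true
  else
    [ -f "$path" ] && exists=true
  fi
  printf '{ "file": "%s", "exists": %s, "date": "%s" }' "$name" "$exists" "$date"
}

# Assemble the array and POST it to the healthcheck service
# (URL and route are assumptions):
#   payload="[$(check_entry dir "$ROOT_PATH/backups/influxdb/$BACKUP_DATE" influx "$BACKUP_DATE")]"
#   curl -s -X POST -H 'Content-Type: application/json' \
#     -d "$payload" "$HEALTHCHECK_URL/api/backups"
```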
Set up a database in the healthcheck service
Create a route to receive the JSON payload and store it in the database
Create a UI that queries the database and shows the user the historical existence of the files
[ ] BLOCKED: Euan to research .... Elasticsearch backs up using a snapshot, so the SSH command also needs to check for the existence of an Elasticsearch snapshot
[ ] Create a deploy script for the backup server: install Docker, configure the firewall, and provision the backup folders with the same permissions used in the emergency backup
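The provisioning part of that script can be sketched as a shell function. The folder set and the `700` mode are assumptions (the emergency-backup permissions are defined elsewhere), and the Docker/firewall steps are left as comments because they need root:

```shell
#!/bin/sh
# Root-only steps the deploy script would also run (assumptions:
# Debian-style host with ufw):
#   apt-get update && apt-get install -y docker.io
#   ufw allow 22/tcp && ufw --force enable

# Provision the backup folders; 700 stands in for the restrictive
# permissions used by the emergency backup (assumed value).
provision_backup_dirs() {
  root="$1"
  for svc in minio metabase influxdb mongo; do
    mkdir -p "$root/backups/$svc"
    chmod 700 "$root/backups/$svc"
  done
}
```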
[ ] The application should show available disk space on the backup server where the application is hosted
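A sketch of the disk-space probe the application could shell out to (the `BACKUP_MOUNT` variable is an assumption):

```shell
#!/bin/sh
# Print the available space, in kilobytes, of the filesystem holding
# the backups; df -P gives the POSIX-stable single-line format.
BACKUP_MOUNT="${BACKUP_MOUNT:-/}"

available_kb() {
  df -Pk "$1" | awk 'NR==2 { print $4 }'
}
```

`available_kb "$BACKUP_MOUNT"` prints a plain integer, which is easy to return from an API route and format in the UI.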
[ ] Deploy the Next.js application to the backup server with environment variables for: SMTP service, URL to the healthcheck API & Manager IP.
[ ] Create an Ansible script for the backup server
Not deployed in the same Docker swarm
The naming convention for the test files:

```
$ROOT_PATH/backups/minio/${BACKUP_DATE} - directory exists
$ROOT_PATH/backups/metabase/${BACKUP_DATE} - directory exists
$ROOT_PATH/backups/influxdb/${BACKUP_DATE} - directory exists
$ROOT_PATH/backups/mongo/hearth-dev-${BACKUP_DATE}.gz - file exists
$ROOT_PATH/backups/mongo/user-mgnt-${BACKUP_DATE}.gz - file exists
$ROOT_PATH/backups/mongo/openhim-dev-${BACKUP_DATE}.gz - file exists
$ROOT_PATH/backups/mongo/application-config-${BACKUP_DATE}.gz - file exists
$ROOT_PATH/backups/mongo/metrics-${BACKUP_DATE}.gz - file exists
$ROOT_PATH/backups/mongo/webhooks-${BACKUP_DATE}.gz - file exists
$ROOT_PATH/backups/mongo/performance-${BACKUP_DATE}.gz - file exists
```
The JSON payload prepared by the scheduler script:

```json
[{ "file": "influx", "exists": true, "date": "2022-08-11" }]
```