Open tdudgeon opened 3 years ago
Direct backup to Echo S3 is not possible; Discourse currently supports only AWS S3. So we need to use local storage for backups. However, with local storage the automated backups run but the resulting file is not pushed to the expected location, making it harder to extract to Echo S3. Bitnami has acknowledged this problem: https://github.com/bitnami/bitnami-docker-discourse/issues/185 We are waiting for them to address it and continue to do manual backups in the meantime.
The underlying backup problem is fixed in bitnami/discourse:2.6.7-debian-10-r2, and the DEV cluster's discourse is now using this (via our docker.io/informaticsmatters/discourse:1.0.2 image).
In order to limit the disruption to the current deployment (which uses cinder volumes for the Discourse data and its backups) we plan to add a 3rd container to the discourse Pod - one that runs 'rclone' and 'cron' - that will copy/sync the backups folder to Echo using an rclone config mapped into the container.
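As a sketch of what that rclone/cron container would run (the remote name `echo`, the endpoint, the credentials, and the mount paths below are illustrative placeholders, not the actual deployment values):

```shell
# Illustrative rclone config mapped into the container,
# e.g. at /config/rclone/rclone.conf. Echo is a Ceph-based
# S3 service, so "provider = Ceph" is assumed here.
# [echo]
# type = s3
# provider = Ceph
# access_key_id = <access-key>
# secret_access_key = <secret-key>
# endpoint = <echo-s3-endpoint>

# Copy/sync the shared backups volume to the Echo S3 bucket.
rclone --config /config/rclone/rclone.conf \
    sync /bitnami/discourse/backups \
    echo:discourse-backups
```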
A cron-based rclone container image has been developed (https://github.com/InformaticsMatters/crone). It is controlled entirely from environment variables and can synchronise a locally mounted directory with a remote S3 service. For now this is deployed alongside the existing discourse containers in the discourse Pod of the development deployment.
It essentially mounts the same shared volume and synchronises at 02:03 every day.
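For reference, the 02:03 daily schedule corresponds to a cron entry along these lines (the paths and remote name are assumptions; the crone image actually assembles its command from environment variables):

```shell
# Illustrative crontab entry: run the sync at 02:03 every day.
# /backups and the "echo" remote are placeholders; the crone
# image derives the real values from environment variables.
3 2 * * * rclone sync /backups echo:discourse-backups >> /var/log/rclone.log 2>&1
```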
If this operates successfully we can upgrade the production discourse and apply the same mechanism.
The production discourse has been updated. Backups should be available on the stfc/echo path discourse-backups/production tomorrow...