The cron job currently does backups based on the config, but it only backs up the files locally.
So we need a way to automate moving those backups to an S3-style backup system, and also from S3 back onto a server.
Google Cloud Storage is a good option. Running MinIO on Google or Hetzner means running 3 of them for proper HA, so it is not viable at this stage of the project.
When documenting it we should also include setting up Google Cloud Storage. Remember the user is non-technical, and they may want to set this up themselves rather than have us do it all for them.
Once we have the basic docs we can do the work, then go back to the docs and see what steps can be optimised for the user with some code changes.
Make the system back up everything, so that when you boot a new server, you can bring everything up as one atomic thing.
Make it possible to use the backup system as a Remote Deployment system.
If booty is on the server then it can be used to restore everything in one hit, and thus allow mass deployment of servers.
Feels like this is something for later, when users have more servers. But if we design it right it will be easy to do later by just having booty on the server with a config telling it where the Remote Config is.
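Something like this is all booty would need to know. Purely a sketch; none of these names exist yet, they just illustrate the "config telling it where the Remote Config is" idea:

```go
// Hypothetical sketch only: the kind of config booty could read to know where
// the remote backups live. None of these names exist yet.
package booty

// RestoreConfig points booty at the offsite backups so a fresh server can
// pull everything down and come up in one hit.
type RestoreConfig struct {
	// BucketURL is the offsite store, e.g. "gs://my-project-backups" (assumed name).
	BucketURL string `yaml:"bucket_url"`
	// ServerID selects which server's backup set to restore.
	ServerID string `yaml:"server_id"`
	// CredentialsPath points at the service-account key used to read the bucket.
	CredentialsPath string `yaml:"credentials_path"`
}
```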
Maybe just use Nomad and Consul to do all this? No, because Nomad and Consul need 3 servers for HA, and with Google we get all this for free.
Where to code this?
Shared is where the code should be as it may later be needed by both booty and the server.
Server will then import shared and be integrated with the backup cron job.
Once a local backup occurs, the offsite backup can be kicked off.
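A rough sketch of what that hook in shared could look like. The names (OffsiteStore, AfterLocalBackup) are made up, and the concrete uploader is whichever option we pick below:

```go
// Rough sketch, not existing code: a storage-agnostic hook in shared that the
// server's backup cron job calls once the local backup file has been written.
package shared

import "context"

// OffsiteStore is whichever implementation we pick from the options below
// (rclone shelled out, go-cloud, etc.).
type OffsiteStore interface {
	// Upload copies a finished local backup file to the offsite bucket.
	Upload(ctx context.Context, localPath string) error
}

// AfterLocalBackup is called by the server right after the cron job writes
// the local backup, kicking off the offsite copy.
func AfterLocalBackup(ctx context.Context, store OffsiteStore, localPath string) error {
	return store.Upload(ctx, localPath)
}
```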
Options for how to do it?
Rclone (https://rclone.org/googlecloudstorage): freaking complex because it's so abstracted... maybe just not worth the complexity.
Manually, using the go-cloud packages. The example at https://github.com/google/go-cloud/tree/master/samples/order shows Google Cloud Storage and Google Pub/Sub being used together, which is nice because when a backup happens a Pub/Sub event will fire to tell the server process that a file changed. This helps avoid race conditions, because each process fires an event only when it has finished.
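A minimal sketch of that approach, assuming a made-up bucket ("my-project-backups") and Pub/Sub topic ("backups"): upload the finished local backup with gocloud.dev/blob, then send a Pub/Sub message so the server process knows the file landed.

```go
// Minimal sketch of the go-cloud approach: copy the local backup to GCS,
// then fire a Pub/Sub event telling listeners that the file changed.
// Bucket and topic names below are assumptions.
package main

import (
	"context"
	"log"
	"os"
	"path/filepath"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob" // registers the gs:// scheme
	"gocloud.dev/pubsub"
	_ "gocloud.dev/pubsub/gcppubsub" // registers the gcppubsub:// scheme
)

func uploadAndNotify(ctx context.Context, localPath string) error {
	data, err := os.ReadFile(localPath)
	if err != nil {
		return err
	}

	// Copy the backup into the offsite bucket.
	bucket, err := blob.OpenBucket(ctx, "gs://my-project-backups") // assumed bucket
	if err != nil {
		return err
	}
	defer bucket.Close()
	key := filepath.Base(localPath)
	if err := bucket.WriteAll(ctx, key, data, nil); err != nil {
		return err
	}

	// Tell anyone listening (e.g. the server process) that the backup landed.
	topic, err := pubsub.OpenTopic(ctx, "gcppubsub://projects/my-project/topics/backups") // assumed topic
	if err != nil {
		return err
	}
	defer topic.Shutdown(ctx)
	return topic.Send(ctx, &pubsub.Message{
		Body:     []byte(key),
		Metadata: map[string]string{"event": "backup-uploaded"},
	})
}

func main() {
	ctx := context.Background()
	if err := uploadAndNotify(ctx, "/var/backups/example-backup.tar.gz"); err != nil {
		log.Fatal(err)
	}
}
```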
https://github.com/creachadair/badgerstore and https://github.com/creachadair/gcsstore both implement the same interface from https://github.com/creachadair/ffs! gcsstore uses https://github.com/google/go-cloud. There might be some opportunities here, for backup and restore in general but also for other things.
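To illustrate the opportunity (a made-up interface, not the actual ffs one): with the local and offsite stores behind the same interface, the same code could mirror a backup from a local Badger-backed store to a GCS-backed one.

```go
// Hypothetical illustration only; this is NOT the actual ffs interface, just
// the shape of the opportunity when two backends share one blob interface.
package shared

import "context"

// BlobStore is a made-up minimal blob interface for the sake of the example.
type BlobStore interface {
	Put(ctx context.Context, key string, data []byte) error
	Get(ctx context.Context, key string) ([]byte, error)
}

// Mirror copies one backup object from src (e.g. a local Badger-backed store)
// to dst (e.g. a GCS-backed store).
func Mirror(ctx context.Context, src, dst BlobStore, key string) error {
	data, err := src.Get(ctx, key)
	if err != nil {
		return err
	}
	return dst.Put(ctx, key, data)
}
```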