Cluster2a opened 4 months ago
That is a good question. We currently have pre/post-backup and pre/post-restore hooks. Those are based on the assumption that you might need to do things like snapshot a database, which requires pre- and post-backup actions, or add data to the backup file before sending it to its target.
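To make that concrete, here is a minimal sketch of what such a pre-backup hook might look like; the `/scripts.d/pre-backup/` path is assumed by analogy with the `/scripts.d/post-backup/` path mentioned later in this thread, and the snapshot command is purely illustrative:

```sh
#!/bin/sh
# Hypothetical pre-backup hook: take an LVM snapshot of the database
# volume so the dump reads a consistent state. Volume group and volume
# names are placeholders; a matching post-backup hook would remove the
# snapshot again with lvremove.
set -e
lvcreate --snapshot --size 1G --name db-snap /dev/vg0/db-data
```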
You are looking for something that happens after sending the file to its target (e.g. upload to S3). The only thing that runs at that point right now is the prune, if enabled, which is built into the timer loop.
Thinking about this, I don't see anything inherently wrong with it. It fits with a general "backup succeeded or failed; either way, tell me somehow so I can execute something." You can already do that with log parsing, as well as via remote telemetry, but it is not quite the same as having an explicit action for "succeeded" or "failed" (but either way, complete).
I don't object to this per se, if we can make it work well.
What are your thoughts on the configuration of it? Most people use just a single target, but it does support multiple targets for a single backup. Trigger once per target?
@deitch, we use one target for the backup. In our case, the main purpose is to send a ping to a service so that the service knows the backup completed successfully.
In case of an error (a missing ping), we would get a notification from Better Stack.
So in our specific use case, the number of targets would not matter, as we don't need a notification per target.
Looking at the current implementations, I would expect something like this:

- `--pre-upload-scripts`
- `--post-upload-scripts`
At least for our use case, this would be a good solution.
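For illustration, a hypothetical invocation under that naming; `backup-tool` stands in for the actual binary, the directory arguments are assumptions, and only `--post-backup-scripts` exists at the time of this discussion:

```sh
# Hypothetical CLI sketch: the existing post-backup hook plus the
# proposed post-upload one. Flag names follow the proposal above;
# --post-upload-scripts is not implemented.
backup-tool \
  --post-backup-scripts /scripts.d/post-backup \
  --post-upload-scripts /scripts.d/post-upload
```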
> the number of targets would not matter, as we don't need a notification per target
It would still have to work for all use cases, both the simple "single target" one and the "multiple targets" one.
> `--pre-upload-scripts` / `--post-upload-scripts`
I think that would work. Do you have enough Go experience to offer a PR?
> I think that would work. Do you have enough Go experience to offer a PR?
I am not working with Go, so I am afraid I can't help out with a PR. Sorry :(
Hang on a moment. Why can you not use `--post-backup-scripts` for this? These get executed after a dump but before sending to a target.
@deitch, that is what we are currently doing, but if the upload fails, we will not know. It would be better to have a `--post-upload-scripts` hook that runs after a successful upload, so we would send the heartbeat after a successful backup and upload.
We are using a heartbeat command (a curl request to uptime.betterstack.com) to keep track of the jobs.
I would like to run the heartbeat after a successful S3 upload.
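For reference, such a heartbeat script is essentially a single curl call. A minimal sketch; the URL path and token are placeholders for whatever heartbeat URL the monitoring service issues:

```sh
#!/bin/sh
# heartbeat.sh - tell the uptime monitor the job finished, so that a
# missing ping raises an alert. The URL is a placeholder.
# -fsS: fail on HTTP errors, stay quiet otherwise but still show errors.
curl -fsS --retry 3 "https://uptime.betterstack.com/api/v1/heartbeat/<token>"
```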
Currently, I am mounting the script at `/scripts.d/post-backup/heartbeat.sh`, which runs before the upload.
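For context, this is roughly how such a script gets mounted into the container; the image name and other options are placeholders:

```sh
# The /scripts.d/post-backup/ path is from this thread; the image name
# is a stand-in for the actual backup container image.
docker run -d \
  -v "$(pwd)/heartbeat.sh:/scripts.d/post-backup/heartbeat.sh:ro" \
  backup-image:latest
```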
Is there any way to inject a script after the upload?