Closed — ghost closed this issue 6 years ago.
Hi afzalSH! Thanks for your suggestion. I can see the circumstances where it would be helpful. The reason it's heavy to implement is that DynamoDbBackUp leverages the S3 bucket versioning feature to support restoring the table state to any given point in the past. This functionality has been a core feature of DynamoDbBackUp from the very beginning. We deliberately decided to write as little code as possible ourselves and rely on AWS web services functionality instead.
@afzalSH The main goal was to achieve an incremental backup. That's why we use DynamoDB Streams, which emit an event for each record modified in a DynamoDB table, together with S3 versioning, which makes it possible to restore any record to its state at any given time. If you want to make a full backup, you can use AWS DataPipeline.
Hi @afzalSH, if you don't have any further questions, I'll close the issue later today.
So I see that the backup in S3 is stored as one file per record. Is this unavoidable? It doesn't look good!