Open mg98 opened 3 years ago
An even better option would be to export to memory (string should be sufficient, looking at the code). The user of the library can then decide where to send the backup to (file, S3, database, etc).
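To illustrate the idea, here is a minimal sketch of an in-memory export. The function name and data shape are hypothetical (the real library writes JSON files to disk); the point is that the caller receives the backup as a string and decides the destination themselves.

```python
import json

def export_users_to_string(users):
    """Hypothetical in-memory export: serialize user records to a JSON
    string instead of writing them to a file. The caller then decides
    where the backup goes (file, S3, database, etc.)."""
    return json.dumps(users, indent=2)

# The caller routes the backup wherever they like:
backup = export_users_to_string([{"Username": "alice"}, {"Username": "bob"}])
# e.g. open("backup.json", "w").write(backup), or an S3 PutObject call
```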
For @mg98: each AWS Lambda container has a temp folder that can be used to write the data to, read it back, and then send it to S3.
Each Lambda function receives 500MB of non-persistent disk space in its own /tmp directory. https://aws.amazon.com/lambda/faqs/
> An even better option would be to export to memory
Yeah, I think that would be useful. Although, looking at scalability, a good implementation of an S3 export could allow exporting data sets that, because of their size, exceed the memory or disk limits of the execution environment.
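One way to sidestep memory and disk limits is to stream the export chunk by chunk instead of materializing it. This is only a sketch under assumed names: `pages` stands in for paginated `ListUsers` responses, and each yielded chunk could be fed to a multipart S3 upload as it is produced.

```python
import json

def stream_users(pages):
    """Yield one JSON line per user record (JSON Lines format), so an
    arbitrarily large export never has to fit in memory or on disk.
    `pages` is a stand-in for paginated ListUsers responses."""
    for page in pages:
        for user in page:
            yield json.dumps(user) + "\n"

# Consume the stream incrementally, e.g. into a multipart S3 upload:
chunks = list(stream_users([[{"Username": "alice"}], [{"Username": "bob"}]]))
```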
This would be nice. It's also pretty easy to chain the two together using the aws cli:

```sh
set -e
backup_name=$(date "+%Y-%m-%d")
cognito-backup-restore backup --pool all --use-env-vars --directory "$backup_name"
aws s3 cp "$backup_name" "s3://YOUR-BUCKET/$backup_name" --recursive
```
It may not work due to https://github.com/rahulpsd18/cognito-backup-restore/issues/39, though. Something to check.
I don't know about the others, but I would like to use this backup solution with a scheduled Lambda function. It would be sweet to have the possibility to export to an S3 bucket. And then it would also make sense to allow restoring from an export in an S3 bucket.
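The scheduled-Lambda flow described above could be sketched roughly as follows. Everything here is an assumption for illustration: `run_backup` stands in for the actual pool export, and `upload(key, body)` stands in for an S3 `PutObject` call (e.g. via boto3) and is passed in so the flow can be exercised without AWS.

```python
import json
from datetime import datetime, timezone

def run_backup():
    # Placeholder for the actual export; the real tool would dump
    # all users of the configured pool here.
    return json.dumps([{"Username": "alice"}])

def handler(event, context, upload):
    """Scheduled-Lambda sketch: run the backup, then push it to S3
    under a date-stamped key. `upload` is a stand-in for S3 PutObject."""
    key = datetime.now(timezone.utc).strftime("backups/%Y-%m-%d.json")
    upload(key, run_backup())
    return {"uploaded": key}

# Without AWS, `upload` can simply record what it was given:
calls = []
result = handler({}, None, upload=lambda key, body: calls.append((key, body)))
```

Restoring from S3 would be the mirror image: download the object for a given date, then feed it to the restore path.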