Closed NirajVisana closed 3 years ago
What do you mean about local AWS network? The traffic already doesn't leave AWS and the speed is limited by EC2's snapshot restore speed, which we can't improve. You can run this on an instance in a private VPC subnet and add an S3 endpoint to the VPC to completely isolate it from the Internet if you like.
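As a sketch of that S3 endpoint setup (the VPC ID, route table ID, and region below are placeholders you'd replace with your own values):

```shell
# Create a Gateway VPC endpoint for S3 so traffic from the instance to S3
# stays on the AWS network instead of going out via an Internet/NAT gateway.
# vpc-0abc1234, rtb-0abc1234 and us-east-1 are placeholder values.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234 \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0abc1234
```

After the endpoint is created, S3 requests from that subnet are routed through it automatically; no change to snap-to-s3 itself is needed.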
You're going to have to find and tag snapshots yourself using the AWS CLI or similar.
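For example, you could list your snapshots and then tag the ones you want migrated (this assumes the tool's default tag key `snap-to-s3` with value `migrate`; the snapshot ID is a placeholder):

```shell
# List snapshots owned by this account so you can pick the ones to migrate.
aws ec2 describe-snapshots --owner-ids self \
  --query 'Snapshots[].[SnapshotId,VolumeId,StartTime]' --output table

# Tag one snapshot for migration (placeholder snapshot ID; the tag key/value
# assumed here is the tool's default - check the README for your version).
aws ec2 create-tags \
  --resources snap-0abc1234 \
  --tags Key=snap-to-s3,Value=migrate
```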
You can run it using nohup and redirect its output to a log file so it keeps running even if the SSH session drops. Or launch it from cron or as a systemd service.
Thanks, it helped.
Tried with nohup but it doesn't work. The script runs up to "calculating size of data to upload", then nothing; I waited overnight and still nothing. Any thoughts?
Thanks
Try running it like:
nohup snap-to-s3 ... >> snap-to-s3.log 2>&1 &
This ensures that the process doesn't die because it's unable to send output to the terminal.
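Note that the order of the redirections matters: the file redirect has to come before `2>&1`, otherwise stderr stays attached to the terminal. A minimal stand-alone illustration of the pattern, using a toy command in place of snap-to-s3:

```shell
# Toy stand-in for a long-running command, writing to stdout and stderr.
# ">> demo.log 2>&1" sends both streams to the log; the trailing "&"
# puts the job in the background so the shell prompt returns immediately.
nohup sh -c 'echo progress; echo warning >&2' >> demo.log 2>&1 &
wait  # only needed here so the example can inspect the log afterwards
cat demo.log  # the log now contains both "progress" and "warning"
```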
How can I run the command in the background? I want to tag all the snapshots, run the command once, and move all of them, because it fails with a broken pipe error when run from an interactive command line.
Also, is there any way I can transfer this through a local AWS network, so that it will be fast and secure?
Thanks Niraj