Closed santoshkatageri closed 3 years ago
Which instance type are you using?
Thanks for the response @thenickdude
Instance type is c3.4xlarge
AMI - baked from Amazon Linux 2
I can see this issue popping up whenever I run multiple snap-to-s3 jobs: the new job is unable to figure out the next free mount point (e.g. /dev/sdh, /dev/sdi, /dev/sdj).
I couldn't reproduce this, can you let me know how many simultaneous snap-to-s3 jobs you are running and if there is any delay between launches of snap-to-s3?
It's possible that the jobs are either exhausting the total mountpoint pool (it only attempts to use letters f-z), or that two jobs attempt to claim the same mountpoint at the same moment and only one of them wins. If I know which is the case I can add retry logic in here to fix it.
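The race described above can be sketched in a few lines. This is not snap-to-s3's actual implementation, just a minimal simulation of the idea: each job walks the f-z device-letter pool and, if another job has already claimed a letter (mimicked here by a local `attached` set and a fake `attachVolume` that throws AWS's `InvalidParameterValue` error), it retries with the next candidate instead of failing outright.

```javascript
// Simulated set of device names already attached to the instance.
const attached = new Set(["/dev/sdf", "/dev/sdg"]);

// Candidate attachment points: /dev/sdf .. /dev/sdz (the f-z pool).
function* candidateDevices() {
  for (let c = "f".charCodeAt(0); c <= "z".charCodeAt(0); c++) {
    yield "/dev/sd" + String.fromCharCode(c);
  }
}

// Stand-in for the EC2 AttachVolume call: throws if the device is
// taken, mimicking AWS's InvalidParameterValue error.
function attachVolume(device) {
  if (attached.has(device)) {
    const err = new Error(`Attachment point ${device} is already in use`);
    err.code = "InvalidParameterValue";
    throw err;
  }
  attached.add(device);
  return device;
}

// Retry loop: walk the pool, skipping devices that lose the race.
function attachWithRetry() {
  for (const device of candidateDevices()) {
    try {
      return attachVolume(device);
    } catch (err) {
      if (err.code !== "InvalidParameterValue") throw err;
      // Another job claimed this device first; try the next letter.
    }
  }
  throw new Error("Mountpoint pool f-z exhausted");
}

console.log(attachWithRetry()); // /dev/sdh, since f and g are taken
```

With only a "pick the first free letter, attach once" strategy, two jobs that scan the pool at the same moment both pick the same letter and one fails; the retry loop above papers over that window. Pool exhaustion (all of f-z in use) still fails, and retrying cannot help there.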
I am currently running 4 snap-to-s3 jobs simultaneously. I am getting this error when I run the next snap-to-s3 job within a short time after the previous one.
It does not throw any error if I run it again.
I've now pushed a new version 0.4.3 which should fix this issue (by adding a retry loop to the attachment procedure), please try it out.
Thanks for the new change you pushed. This solved the issue.
I assume it's waiting for the previous volume to be removed completely.
Not sure if it's caused by the snapshot I am trying to copy, but I am getting the following error:
```
{ Error: snap-xxxx: InvalidParameterValue: Invalid value '/dev/sdi' for unixDevice. Attachment point /dev/sdi is already in use
```
It would be very helpful if you could tell me whether I need to make any changes on the AWS side as well.