Feature Suggestion
VMs are quite large files even after being `.aar`'d, so it's not uncommon for `vm publish` to either fail or stall depending on the speed and stability of your internet connection.
As such, it would be nice for `hostmgr vm publish` to support resuming a failed upload.
Draft Implementation Idea
We should be able to use the `ListMultipartUploads` API to find whether there's an existing upload for the given `--bucket` and `--prefix` of the file we're about to upload (picking the most recent if there is more than one).
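The "pick the most recent" step could be sketched like this. This is only an illustrative sketch in Python assuming boto3-style `ListMultipartUploads` response shapes (hostmgr itself is Swift, and the function name here is hypothetical):

```python
from datetime import datetime, timezone

def most_recent_upload(uploads, key):
    """Pick the most recently initiated in-progress multipart upload for `key`.

    `uploads` is the "Uploads" list from a ListMultipartUploads response:
    dicts with at least "Key", "UploadId", and "Initiated" (a datetime).
    Returns None when no in-progress upload matches.
    """
    matching = [u for u in uploads if u["Key"] == key]
    if not matching:
        return None
    return max(matching, key=lambda u: u["Initiated"])

# Example: two stale uploads for the same key; the newer one wins.
uploads = [
    {"Key": "vm.aar", "UploadId": "old", "Initiated": datetime(2023, 1, 1, tzinfo=timezone.utc)},
    {"Key": "vm.aar", "UploadId": "new", "Initiated": datetime(2023, 2, 1, tzinfo=timezone.utc)},
    {"Key": "other.aar", "UploadId": "x", "Initiated": datetime(2023, 3, 1, tzinfo=timezone.utc)},
]
print(most_recent_upload(uploads, "vm.aar")["UploadId"])  # → new
```

In practice the `uploads` list would come from calling the API with the command's `--bucket` and `--prefix` values.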
Then, if we find an existing upload matching the prefix:
- Skip the call to `AWSRequest.createMultipartUploadRequest` — we don't need to start a new upload.
- Instead, use the `ListParts` API to find which parts have already been uploaded for that upload ID.
- Inside the `file.uploadParts.parallelMap` loop, compare the `Size` and `ETag` returned in the `ListParts` response for that `PartNumber` with the local part's size and MD5 checksum. If they match, skip the call to `self.uploadPart(…)` and instead update `self.progress.completedUnitCount` directly to reflect that the part has already been uploaded.
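The skip check in the last step could look roughly like this (again an illustrative Python sketch with a hypothetical helper name; for parts uploaded individually, S3 reports the part's `ETag` as the quoted MD5 hex digest of that part's bytes):

```python
import hashlib

def part_already_uploaded(local_bytes, remote_part):
    """Decide whether a local part matches a part from a ListParts response.

    `remote_part` is a dict with "Size" and "ETag"; S3 wraps the ETag value
    in double quotes, e.g. '"5d41402abc4b2a76b9719d911017c592"'.
    """
    if remote_part["Size"] != len(local_bytes):
        return False
    local_md5 = hashlib.md5(local_bytes).hexdigest()
    return remote_part["ETag"].strip('"') == local_md5

part = b"hello"
remote = {"Size": 5, "ETag": '"5d41402abc4b2a76b9719d911017c592"'}
print(part_already_uploaded(part, remote))  # → True
```

A `True` result would mean skipping `self.uploadPart(…)` for that part and bumping `self.progress.completedUnitCount` instead, so the progress bar still reflects the already-uploaded bytes.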