aws / aws-sdk-js-v3

Modularized AWS SDK for JavaScript.

Error in uploading: Failed to fetch #6203

Open huangyongfa opened 1 week ago

huangyongfa commented 1 week ago


Describe the bug

I use @aws-sdk/client-s3 + @smithy/fetch-http-handler to upload a large 1.8 GB file, implementing multipart upload and resumable upload.

I referred to these posts:

https://github.com/aws/aws-sdk-js-v3/issues/5334
https://stackoverflow.com/questions/77229817/failed-to-fetch-aws

I updated the aws-sdk version as they suggest, but the upload still fails with the same error: Failed to fetch.

I also implemented logic to re-obtain the sessionToken when it times out. Partway through the upload, one of the parts suddenly fails to connect. Can you help analyze what is going on?
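For reference, a minimal sketch of the credential-refresh approach, assuming the getUploadToken helper from the demo below and assuming its response includes an expiration timestamp (the v3 SDK accepts an async function as credentials and re-invokes it when the credentials it returned are near expiry):

import { S3Client } from '@aws-sdk/client-s3'
import { getUploadToken } from '@/api/common' // same helper as in the demo

// Pass a provider function instead of copying static credentials, so a long
// multipart upload keeps working after the original sessionToken expires.
const s3 = new S3Client({
  region: 'ap-southeast-1',
  credentials: async () => {
    const token = await getUploadToken({ businessType: 'upgradePackage' })
    return {
      accessKeyId: token.accessKeyId,
      secretAccessKey: token.accessKeySecret,
      sessionToken: token.securityToken,
      // `token.expiration` is an assumption about this API; without an
      // expiration the SDK may cache the first result indefinitely.
      expiration: token.expiration ? new Date(token.expiration) : undefined
    }
  }
})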

SDK version number

@aws-sdk/client-s3@3.427.0

Which JavaScript Runtime is this issue in?

Node.js

Details of the browser/Node.js/ReactNative version

v16.14.2

Reproduction Steps

Here's my demo code

<template>
  <div>
    <input type="file" @change="handleBeforeUpload" />
    <div v-if="uploading">
      <p>Uploading: {{ uploadPercent }}% - {{ progressUpSize | formatSize }} / {{ progressTotalSize | formatSize }}</p>
    </div>
    <button @click="handleCustomRequest(111)" style="margin-right: 30px;">Submit</button>
    <button @click="handleAbortUpload">AbortUpload</button>
  </div>
</template>

<script>
import { S3Client, CreateMultipartUploadCommand, UploadPartCommand, CompleteMultipartUploadCommand, AbortMultipartUploadCommand } from '@aws-sdk/client-s3'
import { getUploadToken } from '@/api/common'
import { FetchHttpHandler } from '@smithy/fetch-http-handler'
import SparkMD5 from 'spark-md5'
export default {
  data () {
    return {
      bucket: '', 
      key: '', 
      s3: null,
      uploadId: '',
      fileList: [],
      parts: [],
      awsdk: '',
      uploading: false,
      uploadPercent: 0,
      uploadMd5: '',
      uploadNum: '',
      uploadCancel: false,
      progressUpSize: 0,
      progressTotalSize: 0
    }
  },
  filters: {
    formatSize (value) {
      if (value === 0 || value === undefined) {
        return '0'
      } else if (value < 1024 * 1024) {
        return (value / 1024).toFixed(0) + ' KB'
      } else {
        return (value / 1024 / 1024).toFixed(2) + ' MB'
      }
    }
  },
  methods: {
    handleSessionToken () {
      return new Promise((resolve, reject) => {
        getUploadToken({ businessType: 'upgradePackage' }).then((res) => {
          this.awsdk = res
          resolve(res)
        }).catch(reject) // propagate token errors instead of leaving the promise pending
      })
    },
    handleBeforeUpload (event) {
      this.fileList = []
      this.fileList.push(event.target.files[0])
      this.progressTotalSize = this.fileList[0].size
    },
    async handleCustomRequest (uploadNum) {
      if (!this.fileList.length) {
        alert('Please select the file first')
        return
      }
      const saved = localStorage.getItem(`upload_${this.uploadMd5}_${this.uploadNum}`)
      if (saved && JSON.parse(saved).uploadMd5) {
        // Resume from the saved state instead of starting over
        const savedState = JSON.parse(saved)
        this.uploadMd5 = savedState.uploadMd5
        this.uploadNum = uploadNum
        this.uploading = true
        this.uploadPercent = savedState.uploadPercent || 0
        this.progressUpSize = savedState.progressUpSize || 0
        this.uploadCancel = false
        this.loadUploadState()
        return // don't fall through and reset the progress of a resumed upload
      }
      // Fresh upload: hash the file so its parts can be tracked in localStorage
      const reader = new FileReader()
      reader.onload = (event) => {
        const fileContent = event.target.result
        const spark = new SparkMD5.ArrayBuffer()
        spark.append(fileContent)
        this.uploadMd5 = spark.end()
        this.uploadNum = uploadNum
        this.uploading = true
        this.uploadPercent = 0
        this.uploadCancel = false
        this.loadUploadState()
      }
      reader.readAsArrayBuffer(this.fileList[0])
    },
    // Start uploading or resuming a resumable upload
    async upload () {
      this.awsdk = await this.handleSessionToken()
      const saved = localStorage.getItem(`upload_${this.uploadMd5}_${this.uploadNum}`)
      const savedState = saved ? JSON.parse(saved) : null
      this.key = savedState ? savedState.uploadKey : this.awsdk.pathPrefix + `${new Date().getTime()}.zip`
      this.progressUpSize = savedState ? savedState.progressUpSize : 0
      this.uploadPercent = savedState ? savedState.uploadPercent : 0
      this.bucket = this.awsdk.bucketName
      this.s3 = new S3Client({
        region: 'ap-southeast-1',
        credentials: {
          accessKeyId: this.awsdk.accessKeyId,
          secretAccessKey: this.awsdk.accessKeySecret,
          sessionToken: this.awsdk.securityToken
        },
        httpHandler: new FetchHttpHandler({
          fetchOptions: {
            keepAlive: true,
            requestTimeout: 60000
          }
        })
      })
      try {
        // Reuse the saved uploadId when resuming; only create a new
        // multipart upload for a fresh file (avoids orphaned uploads)
        if (savedState && savedState.uploadId) {
          this.uploadId = savedState.uploadId
        } else {
          const { UploadId } = await this.createMultipartUpload()
          this.uploadId = UploadId
        }
        const chunks = await this.generateChunks(this.fileList[0])
        for (let i = 0; i < chunks.length; i++) {
          if (this.isPartUploaded(i + 1)) {
            continue // If the shard has already been uploaded, skip it
          }
          const { ETag } = await this.uploadPart(chunks[i], i + 1)
          this.savePart(i + 1, ETag)
        }
        await this.completeMultipartUpload()
      } catch (error) {
        this.uploading = false
      }
    },
    // Create a multipart upload
    async createMultipartUpload () {
      const command = new CreateMultipartUploadCommand({
        Bucket: this.bucket,
        Key: this.key
      })
      try {
        const response = await this.s3.send(command)
        return response
      } catch (error) {
        console.log(error)
      }
    },
    // Multipart upload
    async uploadPart (body, partNumber) {
      const command = new UploadPartCommand({
        Bucket: this.bucket,
        Key: this.key,
        PartNumber: partNumber,
        UploadId: this.uploadId,
        Body: body
      })
      try {
        const response = await this.s3.send(command)
        this.progressUpSize += body.size
        this.uploadPercent = ((this.progressUpSize / this.progressTotalSize) * 100).toFixed(2)
        return response
      } catch (error) {
        console.log(error)
        if (!this.uploadCancel) {
          this.loadUploadState() // retry: resume the upload from the saved state
        }
      }
    },
    // Merge all shards
    async completeMultipartUpload () {
      this.parts = localStorage.getItem(`upload_${this.uploadMd5}_${this.uploadNum}`) ? JSON.parse(localStorage.getItem(`upload_${this.uploadMd5}_${this.uploadNum}`)).uploadState.parts : []
      const command = new CompleteMultipartUploadCommand({
        Bucket: this.bucket,
        Key: this.key,
        UploadId: this.uploadId,
        MultipartUpload: {
          Parts: this.parts
        }
      })
      try {
        const response = await this.s3.send(command)
        if (response.ETag) {
          localStorage.removeItem(`upload_${this.uploadMd5}_${this.uploadNum}`) // upload succeeded
          this.uploading = false
          this.uploadPercent = 0
          this.progressUpSize = 0
        }
        return response
      } catch (error) {
        console.log(error)
      }
    },
    // Cancel the upload
    async handleAbortUpload () {
      const command = new AbortMultipartUploadCommand({
        Bucket: this.bucket,
        Key: this.key,
        UploadId: this.uploadId
      })
      const abort = await this.s3.send(command)
      if (abort) {
         this.uploading = false
         this.uploadCancel = true
         localStorage.removeItem(`upload_${this.uploadMd5}_${this.uploadNum}`)
         this.progressUpSize = 0
         this.uploadPercent = 0
      }
    },
    // Shard the file
    generateChunks (file) {
      const chunkSize = 5 * 1024 * 1024 // 5MB
      const chunks = []
      for (let i = 0; i < file.size; i += chunkSize) {
        chunks.push(file.slice(i, i + chunkSize))
      }
      return chunks
    },
    // Check whether the shard has been uploaded
    isPartUploaded (partNumber) {
      return this.getUploadState().parts.some(p => p.PartNumber === partNumber)
    },
    // Save the multipart upload status
    savePart (partNumber, ETag) {
      const state = this.getUploadState()
      const part = { PartNumber: partNumber, ETag }
      const index = state.parts.findIndex(p => p.PartNumber === partNumber)
      if (index > -1) {
        state.parts[index] = part
      } else {
        state.parts.push(part)
      }
      this.saveUploadState(state)
    },
    loadUploadState () {
      const state = this.getUploadState()
      if (state.parts.length > 0) {
        this.parts = state.parts
      }
      this.upload()
    },
    // Get the upload status
    getUploadState () {
      const state = localStorage.getItem(`upload_${this.uploadMd5}_${this.uploadNum}`) ? JSON.parse(localStorage.getItem(`upload_${this.uploadMd5}_${this.uploadNum}`)).uploadState : { parts: [] }
      return state
    },
    // Save the upload status
    saveUploadState (state) {
      const store = {
        progressUpSize: this.progressUpSize,
        uploadPercent: this.uploadPercent,
        uploadKey: this.key,
        uploadId: this.uploadId,
        uploadMd5: this.uploadMd5,
        uploadState: state
      }
      localStorage.setItem(`upload_${this.uploadMd5}_${this.uploadNum}`, JSON.stringify(store))
    }
  }
}
</script>
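One thing worth noting about the demo: when uploadPart fails, its catch handler calls loadUploadState(), which calls upload() again, so every network failure immediately restarts the whole loop with no limit. A bounded per-part retry with backoff, sketched below (uploadPartWithRetry is a hypothetical helper, not part of the SDK), would make transient "Failed to fetch" errors easier to distinguish from a persistent connection problem:

import { UploadPartCommand } from '@aws-sdk/client-s3'

// Hypothetical helper: retry one part a few times with exponential backoff
// instead of restarting the entire upload loop on every error.
async function uploadPartWithRetry (s3, params, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await s3.send(new UploadPartCommand(params))
    } catch (error) {
      if (attempt === maxAttempts) throw error // give up and surface the error
      // wait 1s, 2s, 4s, ... before the next attempt
      await new Promise(resolve => setTimeout(resolve, 1000 * 2 ** (attempt - 1)))
    }
  }
}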

Observed Behavior

[screenshot: console output showing the failed part uploads]

As shown in the screenshot, the upload reached the 8th part and then could not connect, even after retrying; on another run it failed at the 17th part.

Expected Behavior

The upload should complete successfully instead of throwing this error.

Possible Solution

It should work perfectly!

Additional Information/Context

No response

huangyongfa commented 1 week ago
import { S3Client, CreateMultipartUploadCommand, UploadPartCommand, CompleteMultipartUploadCommand, AbortMultipartUploadCommand } from '@aws-sdk/client-s3'
import { FetchHttpHandler } from '@smithy/fetch-http-handler'

this.s3 = new S3Client({
  region: 'ap-southeast-1',
  credentials: {
    accessKeyId: this.awsdk.accessKeyId,
    secretAccessKey: this.awsdk.accessKeySecret,
    sessionToken: this.awsdk.securityToken
  },
  httpHandler: new FetchHttpHandler({
    keepAlive: true,
    requestTimeout: 60000
  })
})

Is there an error in how httpHandler is written here, causing keepAlive to not take effect?

this.s3 = new S3Client({
  // .......
  requestHandler: new FetchHttpHandler({
    keepAlive: true,
    requestTimeout: 6000
  })
})

Using requestHandler in S3Client causes the AWS call to CreateMultipartUploadCommand to fail.
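For reference, the supported S3Client option is requestHandler; httpHandler is not a recognized configuration key, so in the first snippet the custom handler (and its keepAlive setting) was silently ignored. FetchHttpHandler also takes keepAlive and requestTimeout at the top level rather than nested under fetchOptions, as the maintainer's example below shows. A sketch of the presumably intended configuration (any failure seen with requestHandler may itself be the keepAlive issue discussed in the reply below):

import { S3Client } from '@aws-sdk/client-s3'
import { FetchHttpHandler } from '@smithy/fetch-http-handler'

const s3 = new S3Client({
  region: 'ap-southeast-1',
  // credentials omitted here; supply them as in the snippets above
  requestHandler: new FetchHttpHandler({
    keepAlive: true,      // top-level option, not nested under fetchOptions
    requestTimeout: 60000 // milliseconds
  })
})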

aBurmeseDev commented 2 days ago

Hi @huangyongfa - apologies for the delay.

I wasn't able to reproduce with version @aws-sdk/client-s3@3.427.0. Have you tried the workaround suggested in the issue you mentioned?

To resolve this issue, explicitly set keepAlive=false as in my previous example or ensure that your application has @smithy/fetch-http-handler@2.2.3 installed.

You may need to update your lockfile for this, since fetch-http-handler is a transitive dependency of the SDK. You do not need to explicitly install it in your package.json.

import { FetchHttpHandler } from "@smithy/fetch-http-handler";
import { S3Client } from "@aws-sdk/client-s3";

new S3Client({
  requestHandler: new FetchHttpHandler({ keepAlive: false })
});

Please try explicitly setting keepAlive: false, and make sure the latest version of @smithy/fetch-http-handler is installed.
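Since @smithy/fetch-http-handler is a transitive dependency, the lockfile can be checked and refreshed with plain npm commands (a sketch; exact behavior depends on the npm version in use):

# see which version of the transitive dependency is currently resolved
npm ls @smithy/fetch-http-handler

# refresh the lockfile entry so it resolves to >= 2.2.3
npm update @smithy/fetch-http-handler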