ramakrishna-g1 opened 1 week ago
Can you show how you are creating the S3 client and the TransferManager? Are you setting custom values for `targetThroughputInGbps` and `minimumPartSizeInBytes`?
Below are our S3 client and TransferManager objects:
```java
S3AsyncClient.crtBuilder()
        .region(Region.of(region))
        .credentialsProvider(credentials)
        .minimumPartSizeInBytes(10 * MB)
        .httpConfiguration(s3CrtHttpConfiguration)
        .targetThroughputInGbps(50)
        .maxConcurrency(maxConcurrency)
        .build();

S3TransferManager.builder()
        .s3Client(s3AsyncClient)
        .executor(createExecutorService(executorThreadSize, currThreadName + "_slave"))
        .build();
```
Thank you for the sample code. Two notes:
- Can you check if the issue still persists when you use the latest version of the Java SDK? We've recently upgraded the AWS CRT version, which included some bugfixes.
- If the issue still persists, can you generate the CRT Trace logs at the moment of the error? Instructions can be found in our Developer Guide.
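If it helps, CRT trace logging can typically be turned on via the `Log` class in `aws-crt-java` before the first CRT-based client is created. This is a configuration sketch based on the CRT logging API; the log file path is only an example:

```java
import software.amazon.awssdk.crt.Log;

// Must run before the S3AsyncClient.crtBuilder() client is built,
// so the native CRT picks up the logging configuration.
Log.initLoggingToFile(Log.LogLevel.Trace, "/tmp/aws-crt-trace.log");
```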
Thanks for your response. We tried with aws-crt-client (2.26.15); we are no longer seeing the service crash, but we are still experiencing failures while uploading large files. Below is the log.
```
2024-07-04T10:28:40.603Z  INFO 346400 --- [fileupload] [onPool-worker-1] c.r.c.f.services.UploadFileService : Upload initiated for file:/home/ubuntu/FileUploadTesting/10GB.txt
2024-07-04T10:30:00.718Z DEBUG 346400 --- [fileupload] [       Thread-3] software.amazon.awssdk.request : Received failed response: 400, Request ID: R8FCR31ZJ4FE7ZKP, Extended Request ID: 5FIFXPJXLQ8Ahl3+ktxCiUXe6yMpeSmM8vraiiyc6skI6UB+5ifCyvg+54mXJ4wVeYLXQzo1G2E=
2024-07-04T10:30:00.795Z  WARN 346400 --- [fileupload] [onPool-worker-1] s.a.a.t.s.p.LoggingTransferListener : Transfer failed.

software.amazon.awssdk.services.s3.model.S3Exception: Your proposed upload exceeds the maximum allowed size (Service: S3, Status Code: 400, Request ID: R8FCR31ZJ4FE7ZKP, Extended Request ID: 5FIFXPJXLQ8Ahl3+ktxCiUXe6yMpeSmM8vraiiyc6skI6UB+5ifCyvg+54mXJ4wVeYLXQzo1G2E=)
	at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156) ~[aws-xml-protocol-2.26.15.jar!/:na]
	at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108) ~[aws-xml-protocol-2.26.15.jar!/:na]
	at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:85) ~[aws-xml-protocol-2.26.15.jar!/:na]
	at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:43) ~[aws-xml-protocol-2.26.15.jar!/:na]
	at software.amazon.awssdk.core.internal.handler.BaseClientHandler.lambda$successTransformationResponseHandler$7(BaseClientHandler.java:279) ~[sdk-core-2.26.15.jar!/:na]
	at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler.lambda$prepare$0(AsyncResponseHandler.java:92) ~[sdk-core-2.26.15.jar!/:na]
	at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1150) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147) ~[na:na]
	at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler$BaosSubscriber.onComplete(AsyncResponseHandler.java:135) ~[sdk-core-2.26.15.jar!/:na]
	at software.amazon.awssdk.core.internal.metrics.BytesReadTrackingPublisher$BytesReadTracker.onComplete(BytesReadTrackingPublisher.java:74) ~[sdk-core-2.26.15.jar!/:na]
	at software.amazon.awssdk.utils.async.SimplePublisher.doProcessQueue(SimplePublisher.java:275) ~[utils-2.26.15.jar!/:na]
	at software.amazon.awssdk.utils.async.SimplePublisher.processEventQueue(SimplePublisher.java:224) ~[utils-2.26.15.jar!/:na]
	at software.amazon.awssdk.utils.async.SimplePublisher.complete(SimplePublisher.java:157) ~[utils-2.26.15.jar!/:na]
	at java.base/java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:819) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:803) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2195) ~[na:na]
	at software.amazon.awssdk.services.s3.internal.crt.S3CrtResponseHandlerAdapter.onErrorResponseComplete(S3CrtResponseHandlerAdapter.java:181) ~[s3-2.26.15.jar!/:na]
	at software.amazon.awssdk.services.s3.internal.crt.S3CrtResponseHandlerAdapter.handleError(S3CrtResponseHandlerAdapter.java:160) ~[s3-2.26.15.jar!/:na]
	at software.amazon.awssdk.services.s3.internal.crt.S3CrtResponseHandlerAdapter.onFinished(S3CrtResponseHandlerAdapter.java:129) ~[s3-2.26.15.jar!/:na]
	at software.amazon.awssdk.crt.s3.S3MetaRequestResponseHandlerNativeAdapter.onFinished(S3MetaRequestResponseHandlerNativeAdapter.java:25) ~[aws-crt-0.29.25.jar!/:0.29.25]
```
Hi Debora, we also have another observation from testing uploads with SDK v2 vs SDK v1: uploading 90 GB of data took 14 minutes with SDK v1 and 13 minutes with SDK v2. Our S3AsyncClient and TransferManager configuration is as shown above. Please share your thoughts on this.
> software.amazon.awssdk.services.s3.model.S3Exception: Your proposed upload exceeds the maximum allowed size (Service: S3, Status Code: 400, ...
Okay, so now you are hitting the service size limit for uploads; this error message is being sent by S3. What's the size of the file `/home/ubuntu/FileUploadTesting/10GB.txt`? Is it really 10 GB? Maybe it is being uploaded as a single part, which has a 5 GB size limit according to the S3 docs.
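For context, the 5 GB figure is the single-PUT limit; multipart uploads are capped at 10,000 parts of at most 5 GiB each. A stand-alone sketch of that arithmetic (the limits come from the S3 quotas documentation; `S3PartMath` is a hypothetical helper name, not part of the SDK):

```java
// Stand-alone sketch of S3 multipart size arithmetic (limits per S3 docs).
public class S3PartMath {
    static final long MIB = 1024L * 1024;
    static final long GIB = 1024 * MIB;
    static final long MAX_PARTS = 10_000;          // max parts per multipart upload
    static final long MAX_PART_SIZE = 5 * GIB;     // max size of a single part
    static final long SINGLE_PUT_LIMIT = 5 * GIB;  // max size of a non-multipart PUT

    /** Smallest part size (bytes) that fits objectSize into 10,000 parts. */
    public static long minPartSizeFor(long objectSize) {
        return (objectSize + MAX_PARTS - 1) / MAX_PARTS; // ceiling division
    }

    /** Largest object addressable with a fixed part size. */
    public static long maxObjectSizeFor(long partSize) {
        return Math.min(partSize, MAX_PART_SIZE) * MAX_PARTS;
    }

    public static void main(String[] args) {
        long tenGiB = 10 * GIB;
        // A 10 GiB object exceeds the 5 GiB single-PUT limit, so it must go multipart:
        System.out.println(tenGiB > SINGLE_PUT_LIMIT);            // true
        // With the 10 MiB minimumPartSizeInBytes above, 10 GiB splits into 1024 parts:
        System.out.println((tenGiB + 10 * MIB - 1) / (10 * MIB)); // 1024
        // and a fixed 10 MiB part size tops out at ~97 GiB per object:
        System.out.println(maxObjectSizeFor(10 * MIB) / GIB);     // 97
    }
}
```

So a genuine 10 GB file is well within multipart limits with these settings, which is why it matters whether the upload actually went through the multipart path or fell back to a single PUT.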
Can you provide the CRT Trace logs with the error?
Describe the bug
The service crashes while uploading large files (more than 5–10 GB) to S3 using AWS SDK v2 (`S3AsyncClient.crtBuilder`).
Expected Behavior
The service / application should not crash while uploading large files; uploads of any supported file size should succeed.
Current Behavior
The service / application crashes with:

```
Fatal error condition occurred in /codebuild/output/src2479913021/src/aws-crt-java/crt/aws-c-s3/source/s3_buffer_pool.c:243: size <= buffer_pool->mem_limit
Exiting Application
```
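The failed assertion `size <= buffer_pool->mem_limit` comes from the CRT's native buffer pool, whose budget is derived from the client settings (notably `targetThroughputInGbps`) unless capped explicitly. One hedged configuration sketch worth checking is the CRT builder's native memory cap; the 2 GB value here is only an example, not a recommendation:

```java
// Sketch: cap the native (off-heap) memory the CRT-based client may reserve.
// All other settings as in the configuration shown earlier in this issue.
S3AsyncClient s3AsyncClient = S3AsyncClient.crtBuilder()
        .targetThroughputInGbps(50.0)
        .maxNativeMemoryLimitInBytes(2L * 1024 * 1024 * 1024) // example: 2 GiB
        .build();
```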
Reproduction Steps
Use the code below to upload large files.
```java
UploadFileRequest uploadFileRequest = UploadFileRequest.builder()
        .putObjectRequest(putRequest -> {
            putRequest.bucket("bucketName")
                    .key("s3FilePath")
                    .build();
        })
        .source(file)
        .build();
```
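The snippet above only builds the request; a fuller repro sketch would also hand it to the transfer manager and wait for completion (variable names assumed from the configuration shown earlier in this issue):

```java
// Start the upload and block until S3 reports success or failure;
// upload failures surface here as a CompletionException.
FileUpload upload = transferManager.uploadFile(uploadFileRequest);
CompletedFileUpload result = upload.completionFuture().join();
```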
Possible Solution
Service / application should not crash.
Additional Information/Context
We have the below Maven dependencies:
AWS Java SDK version used
2.25.65
JDK version used
17
Operating System and version
Ubuntu 20.04.6 LTS