aws / aws-sdk-java

The official AWS SDK for Java 1.x. The AWS SDK for Java 2.x is available here: https://github.com/aws/aws-sdk-java-v2/
https://aws.amazon.com/sdkforjava
Apache License 2.0

Multipart upload from InputStream (without supplying content-length) uses putObject(). #474

Closed: selaamobee closed this issue 2 months ago

selaamobee commented 9 years ago

I am calling com.amazonaws.services.s3.transfer.TransferManager.upload(String, String, InputStream, ObjectMetadata) and expect it to upload the stream in chunks, but the API tries to upload it in one chunk.

After digging into the code a bit, I discovered that the problematic call order is:

com.amazonaws.services.s3.transfer.internal.UploadCallable.call()
com.amazonaws.services.s3.transfer.internal.UploadCallable.isMultipartUpload()
com.amazonaws.services.s3.transfer.internal.TransferManagerUtils.shouldUseMultipartUpload(PutObjectRequest, TransferManagerConfiguration)
com.amazonaws.services.s3.transfer.internal.TransferManagerUtils.getContentLength(PutObjectRequest)

getContentLength returns -1 if the input is a stream and no content-length was supplied. shouldUseMultipartUpload returns true only if contentLength > configuration.getMultipartUploadThreshold(), and since -1 is not larger than the threshold, multipart upload is not used (and the upload later fails for me because my stream is too big for the API to buffer).

david-at-aws commented 9 years ago

This is the expected behavior if you don't provide a content-length, at least for the moment. If you do know the content length, providing it up front will allow the TransferManager to be significantly more efficient.

S3 requires a content-length header to be specified on each putObject/uploadPart request - if the length isn't known up front (and the stream doesn't support mark/reset), the SDK attempts to buffer the stream into memory to find out how large it is. This obviously only works for relatively small streams - for anything large enough that it can't be buffered in memory, you need to provide a content-length up front. The TransferManager assumes that if you haven't provided a content-length, the stream is small enough that doing a single-part upload will be more efficient than a multi-part upload.
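For example, supplying the length up front looks roughly like this (a minimal sketch, assuming the caller knows the size and the TransferManagerBuilder available in later 1.x releases):

import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

import java.io.InputStream;

public class KnownLengthUpload {
    // With the length set on the metadata, TransferManagerUtils.getContentLength()
    // no longer returns -1, so the multipart threshold check can take effect.
    public static void upload(InputStream in, long contentLength,
                              String bucket, String key) throws InterruptedException {
        TransferManager tm = TransferManagerBuilder.standard().build();
        try {
            ObjectMetadata metadata = new ObjectMetadata();
            metadata.setContentLength(contentLength);
            Upload upload = tm.upload(bucket, key, in, metadata);
            upload.waitForCompletion();
        } finally {
            tm.shutdownNow(false);
        }
    }
}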

Theoretically we could add a configuration option to force a multi-part upload even when the content-length is unknown, using the configured minimum part size and hoping for the best? You'd still have to buffer 5MB at a time to calculate the content-length for each individual part (more if you want to go over ~50GB total, since S3 allows a maximum of 10K parts per MPU), but at least you wouldn't have to buffer the entire thing if you legitimately don't know how large it's going to be.

I'll add this to our backlog (please chime in here if this would be useful to you so we can prioritize appropriately), or I'd be happy to take a look at a pull request.

selaamobee commented 9 years ago

Hello David and thank you for the quick and detailed response.

I am trying to write to S3 from a ZipInputStream and the ZipEntry.getSize() may return -1.

Buffering the file in memory is not an option since it is quite large and I get an OutOfMemoryError.

Writing it to disk is probably what I will do, but I would have preferred not to.

Seems to me like a valid and common use case.

david-at-aws commented 9 years ago

Yeah, buffering to disk is the best option in the short term. On the plus side, the TransferManager can upload multiple parts in parallel when you give it a file, as opposed to having to read/upload from the ZipInputStream serially - may even end up being faster.
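A rough sketch of that workaround (hypothetical helper; assumes the stream is a single zip entry that should become one S3 object):

import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;

import java.io.File;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

public class DiskBufferedUpload {
    public static void upload(InputStream entryStream, String bucket, String key) throws Exception {
        // Spool the unknown-length stream to a temp file so the length (and file) is known.
        File tmp = File.createTempFile("s3-upload-", ".tmp");
        try {
            Files.copy(entryStream, tmp.toPath(), StandardCopyOption.REPLACE_EXISTING);
            TransferManager tm = TransferManagerBuilder.standard().build();
            try {
                // File-based uploads can be split into parts and uploaded in parallel.
                tm.upload(bucket, key, tmp).waitForCompletion();
            } finally {
                tm.shutdownNow(false);
            }
        } finally {
            tmp.delete();
        }
    }
}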

alexmojaki commented 8 years ago

I've written a library that automatically chunks data from a stream into parts so that you can avoid both running out of memory and writing to disk: https://github.com/alexmojaki/s3-stream-upload
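A minimal single-stream usage looks roughly like this (a sketch; the s3Client, bucket, key, and source InputStream are assumed to be in scope, and the constructor matches the library's current releases):

import alex.mojaki.s3upload.MultiPartOutputStream;
import alex.mojaki.s3upload.StreamTransferManager;
import com.amazonaws.services.s3.AmazonS3;

import java.io.InputStream;

public class SingleStreamUpload {
    // Copies one InputStream of unknown length to S3 through a single upload stream.
    public static void copyToS3(AmazonS3 s3Client, String bucket, String key, InputStream in)
            throws Exception {
        StreamTransferManager manager = new StreamTransferManager(bucket, key, s3Client);
        MultiPartOutputStream out = manager.getMultiPartOutputStreams().get(0);
        try {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            out.close();        // must be closed once all the data has been written
            manager.complete(); // completes the multipart upload
        } catch (Exception e) {
            manager.abort(e);   // aborts the upload and any uploaded parts
            throw e;
        }
    }
}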

selaamobee commented 8 years ago

Thank you Alex, it looks nice. My company probably won't give me time to return to this feature now and fix it, but if it happens I'll try to use your library.

mikaelnousiainen commented 8 years ago

TransferManager definitely needs this feature. The size of the stream to be uploaded is often unknown, for example when the data is being generated by or queried from an external service/database.

shokeenAnil commented 7 years ago

@david-at-aws David, is there any ETA for this change?

jhaber commented 7 years ago

For anyone running into this issue while using AmazonS3#putObject(PutObjectRequest), we ended up overriding this method so that if it encounters an InputStream with unknown length, it reads a fixed amount of data into a buffer (1MB, for example). If it reaches the end of the stream before filling the buffer, then we can just upload the buffer. If we fill the buffer before exhausting the stream, then we write to a temporary file and upload that instead. The code is here in case anyone wants to do something similar (and once TransferManager uses NIO we'll probably switch to that).
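A condensed sketch of that idea (hypothetical class and method names; the linked code differs in the details):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class BufferedPut {
    private static final int BUFFER_SIZE = 1024 * 1024; // 1MB probe buffer

    public static void putUnknownLength(AmazonS3 s3, String bucket, String key, InputStream in)
            throws IOException {
        byte[] buffer = new byte[BUFFER_SIZE];
        int filled = 0, read;
        while (filled < BUFFER_SIZE
                && (read = in.read(buffer, filled, BUFFER_SIZE - filled)) != -1) {
            filled += read;
        }
        if (filled < BUFFER_SIZE) {
            // The whole stream fit in the buffer: upload it with a known length.
            ObjectMetadata meta = new ObjectMetadata();
            meta.setContentLength(filled);
            s3.putObject(bucket, key, new ByteArrayInputStream(buffer, 0, filled), meta);
        } else {
            // Too big for the buffer: spool everything to a temp file and upload that instead.
            Path tmp = Files.createTempFile("s3-put-", ".tmp");
            try (OutputStream out = Files.newOutputStream(tmp)) {
                out.write(buffer, 0, filled);
                in.transferTo(out); // Java 9+; otherwise copy with a manual loop
            }
            try {
                s3.putObject(bucket, key, tmp.toFile());
            } finally {
                Files.delete(tmp);
            }
        }
    }
}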

ronreiter commented 6 years ago

@david-at-aws perhaps you could reconsider implementing support for unknown large file sizes in the TransferManager? It's an important feature.

eocron commented 5 years ago

Is there any progress in this direction? There are a lot of streams out there that perform operations without knowing the final length: some zip implementations, serialization, encryption, transfers, etc. This requirement makes it impossible to simply pass a stream between endpoints without buffering it somewhere.

The current approach is acceptable only for relatively small files, which could be sent as a byte array to the lower RPC layer even without using streams. For streams it is unacceptable.

jcogilvie commented 5 years ago

+1 for buffering to a temp file (or a byte array buffer) up to the multipart threshold. Not all streams have a length known up front; in fact, this is part of the reason to use a stream and not just a file, so it's important to account for this.

svobol13 commented 5 years ago

For our use case this is a huge dealbreaker. Within a single execution of our task we generate a couple of files, each a few hundred MB in size. Lambda would be the best fit for this, but thanks to this missing feature it is not an option: we cannot buffer (size-wise) either to disk or to memory. We can either switch languages (Node supports streaming to S3, but the task runs much slower), switch compute services (EC2 supports larger disks for buffering or huge memory, but it is more expensive and more work to operate), or switch cloud providers (the other one at least supports getting an OutputStream in Java). I am deeply disappointed. I didn't expect such an important feature to be missing.

alexmojaki commented 5 years ago

I hope this isn't considered spammy, but I'm going to mention my library again (https://github.com/alexmojaki/s3-stream-upload) because (1) I think @svobol13 would find it very useful, and (2) based on his comment, I think he and possibly others may not have seen my first comment, lost in the middle of this thread.

BramDeCneudt commented 5 years ago

Is there any progress in this? I'll be using the third-party library that @alexmojaki suggested but this seems like an important feature missing in the official AWS library.

prameshbhattarai commented 3 years ago

I was using AWS MultipartUpload (the low-level API) for uploading an InputStream without a Content-Length. Here is a link to my code snippet: https://gist.github.com/prameshbhattarai/dbb19eb2518ab0554e6aeb36b92ee84a#file-s3multipartupload-java and a write-up: https://medium.com/@pra4mesh/uploading-inputstream-to-aws-s3-using-multipart-upload-java-add81b57964e

fkoner commented 3 years ago

@alexmojaki

How can I use your library? I have a ByteArrayInputStream (or something similar).

ObjectMapper objectMapper = new ObjectMapper();

Map<String, Object> obj = objectMapper.readValue(is, HashMap.class);

// Execute some transformation tasks

ByteArrayOutputStream baos = new ByteArrayOutputStream();
objectMapper.writeValue(baos, obj);
ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray());

// How to invoke StreamTransferManager ???

fkoner commented 3 years ago

@alexmojaki

Finally I got it working:

      int numStreams = 4;
      final StreamTransferManager manager = new StreamTransferManager(bucketName, keyNameJSON, s3Client)
        .numStreams(numStreams)
        .numUploadThreads(numStreams)
        .queueCapacity(numStreams)
        .partSize(5);

      manager.customiseUploadPartRequest(new UploadPartRequest().withObjectMetadata(metadata));

      final List<MultiPartOutputStream> streams = manager.getMultiPartOutputStreams();

      final int UPLOAD_PART_SIZE = 1 * Constants.MB;

      ExecutorService pool = Executors.newFixedThreadPool(numStreams);
      for (int i = 0; i < numStreams; i++) {
        final int streamIndex = i;
        pool.submit(new Runnable() {
          public void run() {
            try {
              MultiPartOutputStream outputStream = streams.get(streamIndex);
              int nRead = 0;
              byte[] data = new byte[UPLOAD_PART_SIZE];
              while ((nRead = zipBais.read(data, 0, data.length)) != -1) {
                outputStream.write(data, 0, nRead);
              }

              // The stream must be closed once all the data has been written
              outputStream.close();
            } catch (Exception e) {
              e.printStackTrace();
              // Aborts all uploads
              manager.abort(e);
            }
          }
        });
      }

      pool.shutdown();

      boolean wait = true;

      while (wait) {
        try {
          wait = !pool.awaitTermination(2, TimeUnit.SECONDS);
          log.info("wait, {}", wait);
          if (wait) {
            log.info("Awaiting completion of bulk callback threads.");
          }
        } catch (InterruptedException e) {
          log.info("Interruped while awaiting completion of callback threads - trying again...");
        }
      }

      // Finishing off
      manager.complete();

Thanks in advance, Paco

unsavorypeople commented 3 years ago

Another vote here for this ability to be built into the AWS SDK. This is a huge hole with no good workarounds.

@alexmojaki thank you for your library, but we are hesitant to use it in a production system because we won't be able to get support for it if AWS makes a breaking change in the SDK. Also, it looks like your library requires the V1 SDK, and we are already on V2.

Quite frankly, I'm a bit surprised that AWS has not provided a solution for this basic functionality in the 5+ years this issue has been open.

sdonovanuk commented 3 years ago

C'mon AWS, get it done. The ability to upload from a stream of unknown length is especially useful if you're trying to, for example, (a) stream the result of a database bulk read to S3, or (b) stream a database backup to S3.

rpiccoli commented 3 years ago

Is there any plan to implement this?

dkamakin commented 2 years ago

Is the work still in progress?

sdonovanuk commented 2 years ago

@dkamakin -- it can be emulated. Start with your InputStream. Initiate a multipart upload and set the part size. Then take blocks of your input stream to match the part size and upload each one using a ByteArrayInputStream. Repeat until done, then complete the multipart upload. Remember to abort if anything goes wrong. It's actually not that much code. In my case I'm working with a part size of 5MB, the smallest allowed, as the largest file I'm trying to upload here is ~15MB. Yes, it does mean each uploading thread has a 5MB buffer, but that's not terrible.
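A minimal sketch of that emulation (v1 low-level API, single-threaded, 5MB parts, error handling reduced to the abort; assumes a non-empty stream, since S3 requires at least one part):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.*;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class StreamingMultipartUpload {
    private static final int PART_SIZE = 5 * 1024 * 1024; // S3 minimum part size

    public static void upload(AmazonS3 s3, String bucket, String key, InputStream in) throws Exception {
        String uploadId = s3.initiateMultipartUpload(
                new InitiateMultipartUploadRequest(bucket, key)).getUploadId();
        List<PartETag> etags = new ArrayList<>();
        try {
            byte[] buffer = new byte[PART_SIZE];
            int partNumber = 1;
            int filled;
            while ((filled = fill(in, buffer)) > 0) {
                UploadPartRequest req = new UploadPartRequest()
                        .withBucketName(bucket).withKey(key)
                        .withUploadId(uploadId)
                        .withPartNumber(partNumber++)
                        .withPartSize(filled)
                        .withInputStream(new ByteArrayInputStream(buffer, 0, filled));
                etags.add(s3.uploadPart(req).getPartETag());
            }
            s3.completeMultipartUpload(
                    new CompleteMultipartUploadRequest(bucket, key, uploadId, etags));
        } catch (Exception e) {
            s3.abortMultipartUpload(new AbortMultipartUploadRequest(bucket, key, uploadId));
            throw e;
        }
    }

    // Read until the buffer is full or the stream ends; return the number of bytes read.
    private static int fill(InputStream in, byte[] buffer) throws Exception {
        int filled = 0, read;
        while (filled < buffer.length
                && (read = in.read(buffer, filled, buffer.length - filled)) != -1) {
            filled += read;
        }
        return filled;
    }
}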

alexmojaki commented 2 years ago

@sdonovanuk that's what https://github.com/alexmojaki/s3-stream-upload does. I don't think you should recommend that people do it themselves; it's not that easy.

sdonovanuk commented 2 years ago

@alexmojaki -- dunno. The simplest form of a stream manager would just take bucket, key, and InputStream -- and nothing else, except maybe a buffer specification.

djchapm commented 2 years ago

@alexmojaki - it's great that you wrote code to work around a limitation of the AWS SDK - but please stop promoting it as a solution. This really needs to be part of the AWS SDK, supported by AWS, which is preferable as things change. It might be better if you participate in the V2 AWS SDK on GitHub and submit a PR. There is no way I can put a dependency on a "com.github.alexmojaki" library from GitHub. The API would be much simplified if TransferManager could support streaming.

rpiccoli commented 2 years ago

@djchapm , this is a very negative statement about a successful OSS contribution (173 stars for a small module, as you said, from 'com.github.alexmojaki'). The effort to work around the AWS SDK limitation was shared for free with the community, and it is an actual solution for at least 173 products around the world. I previously added a dependency on 'com.github.alexmojaki' successfully, and it never broke compliance with security or licensing. I am sure the author has no objection to his code being added to the official AWS SDK; that's why he uses the MIT license.

djchapm commented 2 years ago

Hey @rpiccoli and @alexmojaki - no offense meant at all. It is great work! Not sure how you took that so negatively. We have been doing something similar. I've been following this issue #474 for a couple of years now; lately I saw that it's on the agenda to be included in the AWS SDK v2 TransferManager, which is awesome. That work links to this issue, and coming here I see more recent comments from alexmojaki about his contribution as a solution, which I've seen on many threads about this topic. I simply don't want it to detract from the promise of a more native and supported solution that takes advantage of the AWS async/bandwidth improvements in v2. Writing to S3 is currently our biggest performance bottleneck, by far. Having to manage parts and flush in-memory buffers requires overhead that could be handled better if we could get a little closer to the networking side of the AWS connection. I am not sure why AWS doesn't pull the trigger on implementing this; the ticket is over 5 years old and so many people have this issue. As a commenter mentions above, "C'mon AWS, get it done."

rpiccoli commented 2 years ago

Now this is a positive and constructive statement! Indeed, I totally agree with you @djchapm: "C'mon AWS, get it done!". I would happily shift to a non-blocking async client.

djchapm commented 2 years ago

Hey - still tracking this one. I did a quick demo based on what I'm seeing in the v2 transfer manager preview and #2817 and #2814... Couldn't get it to work - I hit the same issue reported here, caused by needing the content size up front. Digging into the libraries, the fromPublisher path ends up constructing a meta request that hardcodes the content size to zero, even though you'd think it would be intended for a publisher-style interface. Here's my code and the versions of everything for the test - if we could get this to work, then I think most everyone's related feature request would be resolved.

Tagging @zoewangg and @Bennett-Lynch for any insights.

/**
 * Java 17
 * s3-transfer-manager-2.17.210-PREVIEW.jar
 * sdk-core-2.17.210.jar
 * reactor-core-3.4.13.jar
 * Native CLIv2 on MacOS monterey: 
 *    aws-cli/2.7.7 Python/3.9.13 Darwin/21.5.0 source/x86_64
 */
class AWSTransferManagerTest {

    public static void main (String[] args) {
        S3ClientConfiguration s3TransferConfig = S3ClientConfiguration.builder()
                .targetThroughputInGbps(20.0)
                .minimumPartSizeInBytes(1024L)
                .build();
        S3TransferManager transferManager = S3TransferManager.builder()
                .s3ClientConfiguration(s3TransferConfig)
                .build();
        Flux<ByteBuffer> flux = Flux.just("one", "two", "three")
                .map(val -> ByteBuffer.wrap(val.getBytes()));
        //verify flux:
        //flux.subscribe(System.out::println);

        Log.initLoggingToFile(Log.LogLevel.Trace, "s3.logs");
        //output in s3.logs:
        // [INFO] [2022-06-16T15:19:49Z] [00007000056fd000] [S3Client] - id=0x7fab24928d20 Initiating making of meta request
        // [ERROR] [2022-06-16T15:19:49Z] [00007000056fd000] [S3MetaRequest] - Could not create auto-ranged-put meta request; there is no Content-Length header present.
        // [ERROR] [2022-06-16T15:19:49Z] [00007000056fd000] [S3Client] - id=0x7fab24928d20: Could not create new meta request.
        // [WARN] [2022-06-16T15:19:50Z] [0000700005d0f000] [JavaCrtGeneral] - Not all native threads were successfully joined during gentle shutdown.  Memory may be leaked.

        Upload upload =
                transferManager.upload(UploadRequest.builder()
                        .putObjectRequest(b -> b.bucket("bucket").key("tmp/flux.bin"))
                        .requestBody(AsyncRequestBody.fromPublisher(flux))
                        .overrideConfiguration(b -> b.addListener(LoggingTransferListener.create()))
                        .build());
        CompletedUpload completedUpload = upload.completionFuture().join();
    }
}

Output in console:

Caused by: software.amazon.awssdk.crt.CrtRuntimeException: S3Client.aws_s3_client_make_meta_request: creating aws_s3_meta_request failed (aws_last_error: AWS_ERROR_INVALID_ARGUMENT(34), An invalid argument was passed to a function.) AWS_ERROR_INVALID_ARGUMENT(34)
    at software.amazon.awssdk.crt.s3.S3Client.s3ClientMakeMetaRequest(Native Method)
    at software.amazon.awssdk.crt.s3.S3Client.makeMetaRequest(S3Client.java:70)
    at software.amazon.awssdk.services.s3.internal.crt.S3CrtAsyncHttpClient.execute(S3CrtAsyncHttpClient.java:97)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.doExecuteHttpRequest(MakeAsyncHttpRequestStage.java:175)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.executeHttpRequest(MakeAsyncHttpRequestStage.java:147)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.lambda$execute$1(MakeAsyncHttpRequestStage.java:99)
    at java.base/java.util.concurrent.CompletableFuture.uniAcceptNow(CompletableFuture.java:757)
    at java.base/java.util.concurrent.CompletableFuture.uniAcceptStage(CompletableFuture.java:735)
    at java.base/java.util.concurrent.CompletableFuture.thenAccept(CompletableFuture.java:2182)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.execute(MakeAsyncHttpRequestStage.java:95)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.execute(MakeAsyncHttpRequestStage.java:60)
    at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallAttemptMetricCollectionStage.execute(AsyncApiCallAttemptMetricCollectionStage.java:55)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallAttemptMetricCollectionStage.execute(AsyncApiCallAttemptMetricCollectionStage.java:37)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.attemptExecute(AsyncRetryableStage.java:144)
    ... 24 more
djchapm commented 2 years ago

There is hope for this yet in https://github.com/aws/aws-sdk-java-v2/issues/139. I'll stop watching this one.

djchapm commented 1 year ago

@david-at-aws - this functionality is now present in AWS SDK v2 - see the ticket above. I think this can finally be closed/completed.

debora-ito commented 2 months ago

Java SDK 2.x TransferManager supports uploads with unknown content-length. Read more about it in the Java SDK 2.x Developer Guide, and open a new issue in the v2 repo if you have any issues or feedback.
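For reference, the documented v2 pattern looks roughly like this (a sketch assuming a recent 2.x release with AsyncRequestBody.forBlockingInputStream and the CRT-based S3 client on the classpath; exact package locations vary slightly between versions):

import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.Upload;
import software.amazon.awssdk.transfer.s3.model.UploadRequest;

import java.io.InputStream;

public class UnknownLengthUploadV2 {
    public static void upload(S3TransferManager tm, String bucket, String key, InputStream in) {
        // Passing null indicates the content length is unknown; the stream is supplied afterwards.
        BlockingInputStreamAsyncRequestBody body = AsyncRequestBody.forBlockingInputStream(null);

        Upload upload = tm.upload(UploadRequest.builder()
                .requestBody(body)
                .putObjectRequest(PutObjectRequest.builder().bucket(bucket).key(key).build())
                .build());

        // Blocks while feeding the stream into the in-flight multipart upload.
        body.writeInputStream(in);

        upload.completionFuture().join();
    }
}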


github-actions[bot] commented 2 months ago

This issue is now closed.

Comments on closed issues are hard for our team to see. If you need more assistance, please open a new issue that references this one.