There is a 5 TB upload limit, as described in the README, but in my opinion the current implementation is not optimal for large files: we process the request from the browser, copy the data into a local file, and then upload that file to S3 in a single request. I don't think this will work well for very large files, so I suggest the following improvements to the upload process:
1. We have to create a new implementation of Takes' RqMultipart, e.g. RqMultipartBuf, which should take a ByteBuffer in its constructor and call a method, e.g. flush(), on each iteration (see the first sketch below).
2. We should create a decorator, e.g. AwsRqMultipartBuf, which overrides flush() and uploads the part from the ByteBuffer to S3 using UploadPart (see the second sketch below).
3. Maybe we should add multipart upload support to jcabi-s3 and use it in step 2.
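
To illustrate step 1, here is a minimal sketch of what RqMultipartBuf could look like. It is not wired into Takes' actual RqMultipart interface; the consume() method and the 8 KB read chunk are my assumptions for illustration, the only fixed points being the ByteBuffer constructor argument and the flush() hook:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

/**
 * Sketch of the proposed buffered multipart body reader: it streams
 * the request body through a fixed-size ByteBuffer and calls flush()
 * every time the buffer fills up, instead of spooling the whole body
 * to a local file first.
 */
class RqMultipartBuf {
    private final ByteBuffer buffer;

    RqMultipartBuf(final ByteBuffer buf) {
        this.buffer = buf;
    }

    /**
     * Consume the body stream, flushing the buffer whenever it is full
     * and once more at the end for the remaining bytes.
     */
    public final void consume(final InputStream body) throws IOException {
        final byte[] chunk = new byte[8192];
        int read;
        while ((read = body.read(chunk)) != -1) {
            int offset = 0;
            while (offset < read) {
                final int len = Math.min(
                    read - offset, this.buffer.remaining()
                );
                this.buffer.put(chunk, offset, len);
                offset += len;
                if (!this.buffer.hasRemaining()) {
                    this.flush();
                }
            }
        }
        if (this.buffer.position() > 0) {
            this.flush();
        }
    }

    /**
     * Called on each full buffer; subclasses decide what to do with
     * the bytes. The base implementation just resets the buffer.
     */
    protected void flush() throws IOException {
        this.buffer.clear();
    }
}
```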
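
And a sketch of the decorator from step 2, built on the class above and using the low-level multipart calls of the AWS SDK for Java (initiateMultipartUpload, uploadPart, completeMultipartUpload), which are the same operations jcabi-s3 would need to wrap for step 3. The complete() method is my assumption about how the upload would be finalized; note also that S3 requires every part except the last to be at least 5 MB, so the ByteBuffer must be at least that large:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.LinkedList;
import java.util.List;

/**
 * Sketch of the proposed decorator: flush() uploads the buffered
 * bytes to S3 as one part of a multipart upload.
 */
final class AwsRqMultipartBuf extends RqMultipartBuf {
    private final AmazonS3 client;
    private final String bucket;
    private final String key;
    private final String upload;
    private final List<PartETag> etags = new LinkedList<>();
    private final ByteBuffer buffer;
    private int part = 1;

    AwsRqMultipartBuf(final ByteBuffer buf, final AmazonS3 clt,
        final String bkt, final String object) {
        super(buf);
        // the same buffer instance that the base class fills
        this.buffer = buf;
        this.client = clt;
        this.bucket = bkt;
        this.key = object;
        // start the multipart upload; parts are added in flush()
        this.upload = clt.initiateMultipartUpload(
            new InitiateMultipartUploadRequest(bkt, object)
        ).getUploadId();
    }

    @Override
    protected void flush() throws IOException {
        final int size = this.buffer.position();
        final byte[] bytes = new byte[size];
        this.buffer.flip();
        this.buffer.get(bytes);
        // upload the buffered bytes as the next part and remember its ETag
        this.etags.add(
            this.client.uploadPart(
                new UploadPartRequest()
                    .withBucketName(this.bucket)
                    .withKey(this.key)
                    .withUploadId(this.upload)
                    .withPartNumber(this.part)
                    .withInputStream(new ByteArrayInputStream(bytes))
                    .withPartSize(size)
            ).getPartETag()
        );
        this.part += 1;
        this.buffer.clear();
    }

    /**
     * Finish the multipart upload once the whole body has been consumed.
     */
    public void complete() {
        this.client.completeMultipartUpload(
            new CompleteMultipartUploadRequest(
                this.bucket, this.key, this.upload, this.etags
            )
        );
    }
}
```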
To be honest, I have my doubts that it is even possible to upload 5 TB through a browser...
Anyway, my suggestion should significantly improve performance when multiple clients simultaneously upload relatively large files (hundreds of megabytes).