Azure / azure-storage-java

Microsoft Azure Storage Library for Java
https://docs.microsoft.com/en-us/java/api/overview/azure/storage
MIT License

How to upload multipart with java blob client #585

Closed ezienecker closed 1 year ago

ezienecker commented 1 year ago

Which service(blob, file, queue, table) does this issue concern?

blob

Which version of the SDK was used?

v12.22.3 and v12.23.0

What problem was encountered?

I am trying to upload a blob to Azure Blob Storage in multiple parts (a block-based upload). I followed this example and implemented it as follows:

val chunkIds = multipartBlob.parts.mapIndexed { index, inputStream ->
    val chunkId = Base64.getEncoder().encodeToString(index.toString().toByteArray())
    this.stageBlock(chunkId, inputStream.buffered(), inputStream.available().toLong())
    chunkId
}.toList()

blockBlob.commitBlockList(chunkIds)

However, this leads to a problem:

com.azure.storage.blob.models.BlobStorageException: Status code 400, "<?xml version="1.0" encoding="utf-8"?><Error><Code>InvalidBlobOrBlock</Code><Message>The specified blob or block content is invalid.
RequestId:97aee5bd-a01e-004a-5152-b9e550000000
Time:2023-07-18T08:36:22.3754340Z</Message></Error>"

I have noticed the following behavior:

Is such a limit known? I could not find anything about it in the documentation. How can I implement multipart uploads with the Azure Storage Blob client library for Java? Since some blobs below this limit are already stored as block blobs, I would prefer not to switch from block blobs to page or append blobs.

I have also tried this sample without success.

A working example would be helpful.

Have you found a mitigation/solution?

No

ezienecker commented 1 year ago

The problem was the chunkId. Referring to the documentation of the "getId" method (https://azure.github.io/azure-storage-java/com/microsoft/azure/storage/blob/BlockEntry.html), it was presumably because the ID changed size once the index went from one digit to two digits. After I changed this to a value with a constant length, it worked.
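Based on this resolution, a minimal sketch of a fixed-length block ID generator: the index is zero-padded to a constant width before Base64 encoding, so every ID in the block list has the same size. The helper name `fixedLengthBlockId` and the 5-digit width are assumptions for illustration, not part of the SDK.

```kotlin
import java.util.Base64

// Hypothetical helper: zero-pad the index to a fixed width (5 digits here,
// an arbitrary choice) before Base64-encoding, so that every block ID in
// the block list has the same length.
fun fixedLengthBlockId(index: Int): String =
    Base64.getEncoder().encodeToString("%05d".format(index).toByteArray())
```

In the loop from the original snippet, `val chunkId = Base64.getEncoder().encodeToString(index.toString().toByteArray())` would then become `val chunkId = fixedLengthBlockId(index)`, and the IDs no longer change size when the index grows past 9.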