Azure / Azurite

A lightweight server clone of Azure Storage that simulates most of the commands supported by it with minimal dependencies

BlockBlobClient.SyncUploadFromUriAsync produces 0 byte file #852

Open · nickharris opened this issue 3 years ago

nickharris commented 3 years ago

Which service(blob, file, queue, table) does this issue concern?

blob

Which version of the Azurite was used?

3.13.1

Where do you get Azurite? (npm, DockerHub, NuGet, Visual Studio Code Extension)

npm

What's the Node.js version?

14.15.0

What problem was encountered?

0 byte file produced when using Azure.Storage.Blobs, BlockBlobClient.SyncUploadFromUriAsync

Steps to reproduce the issue?

npm install azurite@3.13.1

NuGet packages

  <package id="Azure.Core" version="1.15.0" targetFramework="net472" />
  <package id="Azure.Storage.Blobs" version="12.9.1" targetFramework="net472" />
  <package id="Azure.Storage.Common" version="12.8.0" targetFramework="net472" />

simplified code snippet

using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;
...
// Connect to the local Azurite emulator and ensure the destination container exists.
var blobServiceClient = new BlobServiceClient("UseDevelopmentStorage=true");
var containerClient = blobServiceClient.GetBlobContainerClient("transfertothis");
await containerClient.CreateIfNotExistsAsync().ConfigureAwait(false);

// Copy 1.0.0.0.zip from the public "dev" container via Put Blob From URL.
var blobClient = containerClient.GetBlockBlobClient("1.0.0.0.zip");
await blobClient.SyncUploadFromUriAsync(new Uri("http://127.0.0.1:10000/devstoreaccount1/dev/1.0.0.0.zip"), false).ConfigureAwait(false);

Start Azurite and add a file called 1.0.0.0.zip to a container named dev. Set the dev container's access level to public read for blobs.

Run the code and observe that transfertothis/1.0.0.0.zip is created, but it is 0 bytes!
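
For reference, a rough sketch of how the "dev" source container setup in the step above could be scripted with the same SDK; the local file path is only a placeholder:

using Azure.Storage.Blobs.Models; // for PublicAccessType
...
// Create the source container with public read access for blobs so its URL can be read anonymously.
var devContainer = blobServiceClient.GetBlobContainerClient("dev");
await devContainer.CreateIfNotExistsAsync(PublicAccessType.Blob).ConfigureAwait(false);
// "local-path/1.0.0.0.zip" is a placeholder for wherever the test file lives on disk.
await devContainer.GetBlobClient("1.0.0.0.zip").UploadAsync("local-path/1.0.0.0.zip", overwrite: true).ConfigureAwait(false);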

If possible, please provide the debug log using the -d parameter, replacing <pathtodebuglog> with an appropriate path for your OS, or review the instructions for Docker containers:

Microsoft employees: please reach out to me on Teams for logs or a screen share if needed.

-d "<pathtodebuglog>"

Please be sure to remove any PII or sensitive information before sharing!
The debug log will log raw request headers and bodies, so that we can replay these against Azurite using REST and create tests to validate resolution.

Have you found a mitigation/solution?

no

blueww commented 3 years ago

@nickharris Thanks for raising this issue!

"Put Blob From URL" is a known not supported feature of Azurite. See https://github.com/Azure/azurite#support-matrix, "Put Blob From URL" is still not supported.

We will consider fixing this in a future release by reporting an error instead of creating a 0-byte blob.

nickharris commented 3 years ago

Please ensure that by "report error" you do not mean throwing or returning a non-success status code that will cause the Azure Storage SDK to throw. As it stands at present, I can still develop against what you have; I just get a 0-byte file, and anything downstream that needs that file will not work. While not ideal, as long as the functionality is not implemented in Azurite, the current 0-byte file is better than throwing, since I can still make progress without having to write development code to work around it.

blueww commented 3 years ago

@nickharris

For an unsupported API, Azurite is designed to report an error with a 400 status code, which will be returned and thrown by the SDK.

I can understand your concern that reporting an error will require additional code on your side to handle it. But for an unsupported API, failing fast is normally better than an unexpected success, which takes additional effort in later error investigation.
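
For illustration, a minimal sketch of the kind of dev-time handling this would require, assuming the 400 surfaces as Azure.RequestFailedException; the HttpClient fallback here is only one possible workaround, not an official pattern:

using System;
using System.Net.Http;
using Azure;
...
var sourceUri = new Uri("http://127.0.0.1:10000/devstoreaccount1/dev/1.0.0.0.zip");
try
{
    // Fast path: Put Blob From URL (works against real Azure Storage).
    await blobClient.SyncUploadFromUriAsync(sourceUri, false).ConfigureAwait(false);
}
catch (RequestFailedException ex) when (ex.Status == 400)
{
    // Dev-only fallback while the emulator lacks Put Blob From URL:
    // stream the source blob down and re-upload it as a regular block blob.
    using (var http = new HttpClient())
    using (var sourceStream = await http.GetStreamAsync(sourceUri).ConfigureAwait(false))
    {
        await blobClient.UploadAsync(sourceStream).ConfigureAwait(false);
    }
}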

nickharris commented 3 years ago

OK, no problem; I'll just pin to a version of Azurite with the current behavior until you add support for the API. Do note that throwing will reduce adoption of those APIs; e.g., when I hit the same issue with tags, I just moved to the less efficient metadata APIs. It would be good if you could add official support for this and the tags APIs to Azurite.
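
(For context, the tags-to-metadata substitution mentioned above is roughly the following kind of thing; the tag name and value are just an example, and metadata is less efficient because blobs cannot be filtered server-side by metadata the way they can by tag queries:)

using System.Collections.Generic;
...
var values = new Dictionary<string, string> { { "release", "1.0.0.0" } };
// Preferred against real Azure Storage (queryable via Find Blobs by Tags):
// await blobClient.SetTagsAsync(values).ConfigureAwait(false);
// Fallback while Azurite lacks the tags API:
await blobClient.SetMetadataAsync(values).ConfigureAwait(false);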

blueww commented 3 years ago

@nickharris Thanks for your understanding!