Azure / azure-storage-java

Microsoft Azure Storage Library for Java
https://docs.microsoft.com/en-us/java/api/overview/azure/storage
MIT License

Retry operation with new SAS token #494

Closed ikryvorotenko closed 5 years ago

ikryvorotenko commented 5 years ago

Which service(blob, file, queue, table) does this issue concern?

Blob

Which version of the SDK was used?

Please note that if your issue is with v11, we are recommending customers either move back to v10 or move to v12 (currently in preview) if at all possible. Hopefully this resolves your issue, but if there is some reason why moving away from v11 is not possible at this time, please do continue to ask your question and we will do our best to support you. The README for this SDK has been updated to point to more information on why we have made this decision.

v8

What problem was encountered?

I'm looking for functionality that would allow me to retry an Azure operation with updated authentication.

The problem is the following: given a SAS token, I call some Azure Blob API method. If the operation takes longer than the SAS token is valid, I get a 403. Is there a way to hook the 403 response on an operation and retry it with a new SAS token (which I obtain from a 3rd-party service)?

Have you found a mitigation/solution?

Nope

rickle-msft commented 5 years ago

Before I get into hooking into the retry policy, is there a reason why it's insufficient to just catch the 403 and then reset the credential before trying the operation again?
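The catch-and-retry approach suggested here can be sketched as below. This is a minimal, self-contained simulation, not the Azure SDK: `StorageCallException`, `SasOperation`, and `fetchNewSas` are hypothetical stand-ins (in v8 the real exception would be `com.microsoft.azure.storage.StorageException`, which exposes the HTTP status code).

```java
import java.util.function.Supplier;

public class SasRetry {

    /** Minimal stand-in for a storage exception carrying an HTTP status. */
    static class StorageCallException extends RuntimeException {
        final int httpStatus;
        StorageCallException(int httpStatus) { this.httpStatus = httpStatus; }
    }

    /** An operation parameterized by the SAS token it should use. */
    interface SasOperation<T> {
        T run(String sasToken) throws StorageCallException;
    }

    static <T> T callWithSasRefresh(SasOperation<T> op,
                                    String initialSas,
                                    Supplier<String> fetchNewSas) {
        try {
            return op.run(initialSas);
        } catch (StorageCallException e) {
            if (e.httpStatus != 403) throw e;
            // Token likely expired: obtain a fresh SAS and retry once.
            return op.run(fetchNewSas.get());
        }
    }

    public static void main(String[] args) {
        String result = callWithSasRefresh(sas -> {
            if (sas.equals("expired-sas")) throw new StorageCallException(403);
            return "ok:" + sas;
        }, "expired-sas", () -> "fresh-sas");
        System.out.println(result); // prints "ok:fresh-sas"
    }
}
```

As the follow-up comments explain, this only works when the whole operation can be re-run as a unit; it does not help when the 403 surfaces mid-stream.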

ikryvorotenko commented 5 years ago

The problem is that the 403 is returned under the hood of the API methods. For example, when I'm using blobContainer.openOutputStream() and working with the stream for some time, if the token expires in the meantime, I only get the exception from the stream method, and so I have to start over from scratch.

I'm looking for something that would allow me to hook the 403 inside openOutputStream (where the Azure API calls are happening).

rickle-msft commented 5 years ago

You mentioned output streams, but also mentioned reading, so I'll just address both. I think for reading, this shouldn't be as much of an issue. There should be an overload of openInputStream that takes an offset and a length, so if you get a failure during your read, you can just open the stream again from where you left off.
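The resume-on-failure read pattern described above can be sketched as follows. This is a hedged, self-contained simulation under stated assumptions: `RangeOpener` and `Reader` stand in for the SDK call (in v8, something like `blob.openInputStream(offset, length, null, null, null)`), and the failure mode is simulated with an `IOException`.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class ResumableRead {
    /** Opens a stream-like reader starting at the given byte offset. */
    interface RangeOpener {
        Reader openFrom(long offset);
    }
    interface Reader {
        /** Returns the next byte, or -1 at EOF; throws if e.g. the SAS expired. */
        int read() throws IOException;
    }

    static byte[] readAllWithResume(RangeOpener opener, int maxReopens) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long offset = 0;
        int reopens = 0;
        Reader r = opener.openFrom(offset);
        while (true) {
            int b;
            try {
                b = r.read();
            } catch (IOException e) {
                if (++reopens > maxReopens) throw e;
                // Reopen from the last successfully read offset,
                // presumably after refreshing the SAS token.
                r = opener.openFrom(offset);
                continue;
            }
            if (b == -1) return out.toByteArray();
            out.write(b);
            offset++;
        }
    }
}
```

The key point is that only the bytes already consumed are kept, so a mid-read 403 costs one reopen rather than a full re-download.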

Writing is a bit trickier because you'll lose the block list if the stream fails while you're writing, so you would indeed have to restart. Unfortunately, in v8 I don't think there's a way to update the URL in between retries. There is an event fired before sending a request and after receiving a response, but I don't believe it gives you any means of affecting the request/response/retry behavior. Therefore, I think I'd have to recommend that you discuss increasing the SAS duration with the third party. There might be a way to do this in v12 if you are interested in that preview. I can investigate that further if you'd like.
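One workaround not suggested in the thread, but consistent with the block-list problem described above, is to manage the block list yourself: stage blocks one at a time (v8 exposes `CloudBlockBlob.uploadBlock` and `commitBlockList` for this) and refresh the SAS between blocks when a 403 occurs, so only the current block is retried. The sketch below is a self-contained simulation under that assumption; `BlockStore` and the 403-as-`SecurityException` signaling are stand-ins, not SDK types.

```java
import java.util.ArrayList;
import java.util.Base64;
import java.util.List;
import java.util.function.Supplier;

public class StagedUpload {
    /** Stand-in for the service: stages one block, then commits the list. */
    interface BlockStore {
        void uploadBlock(String sas, String blockId, byte[] data) throws SecurityException;
        void commitBlockList(String sas, List<String> blockIds) throws SecurityException;
    }

    static void upload(BlockStore store, byte[] payload, int blockSize,
                       String sas, Supplier<String> fetchNewSas) {
        List<String> staged = new ArrayList<>();
        int seq = 0;
        for (int off = 0; off < payload.length; off += blockSize) {
            int len = Math.min(blockSize, payload.length - off);
            byte[] block = new byte[len];
            System.arraycopy(payload, off, block, 0, len);
            // Block IDs must be base64-encoded and of equal length, as in the real API.
            String id = Base64.getEncoder()
                    .encodeToString(String.format("%06d", seq++).getBytes());
            try {
                store.uploadBlock(sas, id, block);
            } catch (SecurityException e) {
                // 403 mid-upload: refresh the SAS and retry only this block.
                sas = fetchNewSas.get();
                store.uploadBlock(sas, id, block);
            }
            staged.add(id);
        }
        store.commitBlockList(sas, staged);
    }
}
```

Because the caller owns the staged-block list, an expired token no longer forces a restart of the whole upload, at the cost of giving up the convenience of `openOutputStream`.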

ikryvorotenko commented 5 years ago

Thanks for covering both scenarios. I don't think v12 would be acceptable for us, as it's not released yet. Though if you have a solution, I'd be happy to check it later.

Do you have any information about v12 release plan?

rickle-msft commented 5 years ago

As an extra bit of information, you can maybe use this to help determine what your SAS duration should be. Perhaps the third party can accept an operation type and size, and adjust the duration of the SAS it gives you based on how long the operation could take at a maximum.
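The sizing suggestion above amounts to simple arithmetic: duration = payload size / worst-case throughput, padded by a safety factor. The numbers below are illustrative assumptions, not SDK behavior or service limits.

```java
public class SasDuration {
    /**
     * Picks a SAS validity window (in seconds) from the payload size,
     * a conservative minimum throughput, and a safety multiplier.
     */
    static long sasDurationSeconds(long payloadBytes,
                                   long minBytesPerSecond,
                                   double safetyFactor) {
        long base = (long) Math.ceil((double) payloadBytes / minBytesPerSecond);
        return (long) Math.ceil(base * safetyFactor);
    }

    public static void main(String[] args) {
        // 1 GiB at a pessimistic 1 MiB/s, doubled for safety: ~2048 s.
        System.out.println(sasDurationSeconds(1L << 30, 1L << 20, 2.0));
    }
}
```

The third party would run something like this when minting the token, using whatever throughput floor they consider realistic for their clients.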

Unofficially, I'll say that our projected timeline is preview 3 next week, preview 4 about a month after that, and GA is supposed to be one month after preview 4. It's not really up to me to set those timelines, but maybe that gives you a rough idea of what's feasible.

ikryvorotenko commented 5 years ago

Thanks for your help, appreciate that.