Closed jlester-msft closed 2 years ago
Hi @jlester-msft ! I was unable to repro your scenario. I tried uploading from ADLS gen 2 (with hierarchical namespace enabled) to a v1 storage account and it was successful for me. From the logs it seems that the tier level is preventing the blobs from being copied. Do you mind trying again and setting --s2s-preserve-access-tier to false? Please let me know how it goes!
Hi @siminsavani-msft, adding in "--s2s-preserve-access-tier=false" allowed me to successfully transfer from the DataLake to the Storage Account. The "Failed to create one or more destination..." message still appears but it is still able to complete the transfer.
PS E:\projects> .\azcopy copy "https://dataset1000genomes.blob.core.windows.net/dataset/data_collections/1000_genomes_project/data/ACB/HG01879/exome_alignment/HG01879.alt_bwamem_GRCh38DH.20150826.ACB.exome.cram?$datalake_sas_token" "https://wdltestcf6818421e7f.blob.core.windows.net/inputs/HG01879.alt_bwamem_GRCh38DH.20150826.ACB.exome.cram?$wdl_sas_token" --s2s-preserve-access-tier=false
INFO: Scanning...
INFO: Failed to create one or more destination container(s). Your transfers may still succeed if the container already exists.
INFO: Any empty folders will not be processed, because source and/or destination doesn't have full folder support
Job 47dc852c-22c8-604a-421b-ff71fe427770 has started
Log file is located at: C:\Users\jlester\.azcopy\47dc852c-22c8-604a-421b-ff71fe427770.log
99.9 %, 0 Done, 0 Failed, 1 Pending, 0 Skipped, 1 Total,
Job 47dc852c-22c8-604a-421b-ff71fe427770 summary
Elapsed Time (Minutes): 0.3344
Number of File Transfers: 1
Number of Folder Property Transfers: 0
Total Number of Transfers: 1
Number of Transfers Completed: 1
Number of Transfers Failed: 0
Number of Transfers Skipped: 0
TotalBytesTransferred: 6611021194
Final Job Status: Completed
When you were unable to repro the issue, did the transfer work successfully without adding in "--s2s-preserve-access-tier=false"?
After digging into this a bit more, the issue comes down to the fact that the specific destination Storage Account being used here, wdltestcf6818421e7f, is a v1 Storage Account with its access tier disabled. Anyone with the same issue can check this in the Azure Portal: open the Storage Account's Overview page and look for a "Default access tier" entry under "Blob service".
If it shows a default of 'Hot' you shouldn't encounter this issue. If it doesn't have a "Default access tier" listed you have three options: 1) Use --s2s-preserve-access-tier=false 2) Upgrade to a v2 Storage Account 3) See if you can change your v1 Storage Account to not have a disabled access tier, i.e. default to Hot
Likely 1) is the best solution.
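For anyone who prefers the command line over the Portal, the same check can be sketched with the Azure CLI (account and resource-group names below are placeholders, not the ones from this issue):

```shell
# Check whether the destination account is v1 ("Storage") or v2 ("StorageV2"),
# and whether a default access tier is configured.
# <mystorageaccount> and <myresourcegroup> are placeholders.
az storage account show \
  --name <mystorageaccount> \
  --resource-group <myresourcegroup> \
  --query "{kind: kind, accessTier: accessTier}" \
  --output json
```

Based on the diagnosis in this thread, a v1 account (kind "Storage") with no access tier set is the configuration that triggers the error, so seeing "kind": "Storage" here would point to workaround 1) or an upgrade via 2).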
Thanks!
Which version of AzCopy was used?
azcopy version 10.13.0
Which platform are you using? (ex: Windows, Mac, Linux)
Windows PowerShell
What command did you run?
What problem was encountered?
AzCopy starts the copy (this is a 6GB file) with a warning: "Failed to create one or more destination container(s). Your transfers may still succeed if the container already exists". It attempts the copy and then fails. The top of the log shows a "403 This request is not authorized to perform this operation." response error, which is confusing because the same operation works when copying from a different source, such as the local file system, so it did not seem like an authorization error.
Checking further in the log file shows the true error at the bottom: "400 Blob access tier is not supported on this storage account type." This is what is preventing the transfer. AzCopy sets "X-Ms-Access-Tier: [Hot]", and the response indicates that the destination is a v1 Storage Account, which does not support access tiers.
There's also a more verbose error message with the true failure: 400 Blob access tier is not supported on this storage account type.. When Committing block list
If you attempt to run the same command with --block-blob-tier "None" --page-blob-tier "None", it still sets the access tier to Hot and the transfer fails. So there seems to be a bug: AzCopy does not realize that the destination is a v1 Storage Account that doesn't support access tiers, and/or it does not honor the specified --block-blob-tier/--page-blob-tier values.
Console output:
First error message inside the log file:
Log message at the end of the log:
How can we reproduce the problem in the simplest way?
Source:
Destination:
Log files:
2a578b5e-d116-4c41-485c-40284e5dbddc-scanning.log 2a578b5e-d116-4c41-485c-40284e5dbddc.log
Have you found a mitigation/solution?
Only by copying the file locally (Data Lake Gen2 to local file system) and then uploading the file from the local drive to the Storage Account (local file system to v1 Storage Account).
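The two-step mitigation described above can be sketched as the following pair of commands (URLs, paths, and SAS tokens are placeholders; per this report, the final upload step succeeds because it does not hit the access-tier error that the service-to-service copy does):

```shell
# Step 1: download from the ADLS Gen2 source to the local file system.
# <datalake-account>, <container>, <blob>, and <sas> are placeholders.
azcopy copy "https://<datalake-account>.blob.core.windows.net/<container>/<blob>?<sas>" "C:\temp\HG01879.cram"

# Step 2: upload from the local file system to the v1 Storage Account.
azcopy copy "C:\temp\HG01879.cram" "https://<v1-account>.blob.core.windows.net/<container>/<blob>?<sas>"
```

Note that for a large number of files this doubles the transfer work, so the --s2s-preserve-access-tier=false flag discussed earlier in the thread is the simpler fix when a direct copy is wanted.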