ldepaiva opened this issue 4 years ago
Hi @LucasAntoniassi
Sorry for the late response. V3 uses a new data structure that is different from V2's.
If you want a migration, you can leverage Azure tools like AzCopy to copy blob data from v2 to v3. For example, host the v2 blob endpoint on port 10000 and the v3 blob endpoint on port 11000:
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-blobs
azcopy copy 'https://127.0.0.1:10000/devstorageaccount1?sv=2016-05-31&sig=SL1tiZVonWXUNfh93EQHCpz5DKYSeie5%2F7jeyK58yeI%3D&st=2018-12-17T06%3A10%3A39Z&se=2020-12-17T06%3A10%3A39Z&srt=sco&ss=bfqt&sp=racupwdl' 'https://127.0.0.1:11000/devstorageaccount1?sv=2016-05-31&sig=SL1tiZVonWXUNfh93EQHCpz5DKYSeie5%2F7jeyK58yeI%3D&st=2018-12-17T06%3A10%3A39Z&se=2020-12-17T06%3A10%3A39Z&srt=sco&ss=bfqt&sp=racupwdl' --recursive
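If it helps, here is a rough sketch of what running the two versions side by side could look like before the copy (the flag names below are assumptions based on the v2 and v3 READMEs; check the versions you actually have installed):

# Legacy Azurite v2, blob endpoint on port 10000 (assumed flags: -l for the data folder, --blobPort)
azurite -l ./azurite-v2-data --blobPort 10000

# Azurite v3, blob endpoint on port 11000 (assumed flags: --location, --blobHost, --blobPort)
azurite --location ./azurite-v3-data --blobHost 127.0.0.1 --blobPort 11000

With both endpoints running, the AzCopy command above copies the account contents from the v2 endpoint on port 10000 to the v3 endpoint on port 11000.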
Sometimes people only use Azurite for testing, so there is no need for a data migration. Are you using Azurite as persistent data storage?
Hi @XiaoningLiu
Thank you for your response!
I am actually using it as persistent blob storage for a client that did not want to go to the cloud and prefers to run the app on their intranet instead.
I am thinking about migrating them from v2 to v3. Would it be possible to use the azcopy tool for that?
Thank you!
Hi @XiaoningLiu
I attempted to use the azcopy tool, but I ran into some issues.
Firstly, it complains when I don't pass any container:
./azcopy copy "http://127.0.0.1:10000/devstoreaccount1?sig=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==" ./output --recursive --from-to=BlobLocal
Then, I attempted to pass * to download everything using this command:
./azcopy copy "http://127.0.0.1:10000/devstoreaccount1/*?sig=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==" ./output --recursive --from-to=BlobLocal
This is the response:
INFO: Scanning...
INFO: NOTE: HTTP is in use for one or more location(s). The use of HTTP is not recommended due to security concerns.
INFO: Any empty folders will not be processed, because the source and/or destination doesn't have full folder support
And the Azurite blob logs keep repeating this request in an infinite loop:
GET /devstoreaccount1?comp=list&sig=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq%2FK1SZFPTOtr%2FKBHBeksoGMGw%3D%3D&timeout=901 200 0.506 ms - 1176
Can I copy everything from the blob storage to my local file system or to another blob storage without specifying the containers?
Hi LucasAntoniassi,
AzCopy supports copying all containers from one storage account to another. Here are the samples: https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-blobs?toc=/azure/storage/blobs/toc.json#copy-all-containers-directories-and-blobs-to-another-storage-account
I'm curious why it does not work in your case; it still needs investigation.
In the meantime, please enable the AzCopy logs and share them here so we can see why the request is looping infinitely: https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-configure?toc=/azure/storage/blobs/toc.json#troubleshoot-issues
Also, if you don't have many containers, you can try an AzCopy container-level copy to see whether it works.
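If it helps, here is a rough sketch of capturing those logs, assuming AzCopy v10 (verbosity via the --log-level flag, log folder via the AZCOPY_LOG_LOCATION environment variable; replace <SAS> with the SAS query string used above):

# Redirect the AzCopy log files to a local folder so they are easy to attach to this issue
export AZCOPY_LOG_LOCATION=./azcopy-logs

# Re-run the copy with verbose logging enabled
./azcopy copy "http://127.0.0.1:10000/devstoreaccount1?<SAS>" ./output --recursive --from-to=BlobLocal --log-level=INFO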
Hello XiaoningLiu,
I think I found the issue, and it may be an issue with the azcopy tool.
When I run this command:
./azcopy cp "http://127.0.0.1:10000//devstoreaccount1?sig=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==" ./output --recursive --from-to=BlobLocal
it makes this following request on azurite:
blob-v2_1 | GET /devstoreaccount1/devstoreaccount1?comp=list&include=metadata&restype=container&sig=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq%2FK1SZFPTOtr%2FKBHBeksoGMGw%3D%3D&timeout=901 404 0.623 ms - 141
The issue is that the URL should not contain devstoreaccount1/devstoreaccount1, since I did not specify any container. AzCopy is automatically adding devstoreaccount1 as the container, which is wrong.
I tested this query manually using an HTTP client tool and verified that it works when I replace devstoreaccount1/devstoreaccount1 with devstoreaccount1.
This is the right URL:
http://127.0.0.1:10000/devstoreaccount1/?comp=list&include=metadata&restype=container&sig=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq%2FK1SZFPTOtr%2FKBHBeksoGMGw%3D%3D&timeout=901
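For anyone who wants to reproduce the manual check, a plain curl request against that corrected URL (with the same SAS token) should return the container listing:

curl -i "http://127.0.0.1:10000/devstoreaccount1/?comp=list&include=metadata&restype=container&sig=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq%2FK1SZFPTOtr%2FKBHBeksoGMGw%3D%3D&timeout=901"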
Finally, I think this is not an issue with azurite but an issue with azcopy.
By the way, I looked for the logs and there are none; I think that is because AzCopy never actually created a job.
It works fine when you specify the container name or the container + blob name.
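For example, a container-level copy like the following completes without problems (mycontainer is just a placeholder container name; the SAS query string is the same one used above):

# Copy a single, explicitly named container from Azurite v2 to the local file system
./azcopy copy "http://127.0.0.1:10000/devstoreaccount1/mycontainer?sig=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==" ./output --recursive --from-to=BlobLocal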
Hi There,
First of all, thank you for your amazing work on this project.
I tried to upgrade the Azurite blob storage from v2 to v3, but I started to get StorageError: NotFound. It looks like v3 does not understand the v2 data structure. Is this expected? How can I make v3 work with v2 data?
Thank you in advance.