axieinfinity / ronin

A DPoS blockchain.
GNU Lesser General Public License v3.0

Ronin Archival Snapshot #428

Closed: shaswatsaloni closed this issue 4 months ago

shaswatsaloni commented 7 months ago

Hi all, can anyone provide me with a link to download an archival snapshot for mainnet, starting from block 1?

Thanks, Saloni

qui-pham commented 7 months ago

Certainly, please find the link below to download the archival snapshot for the mainnet starting from Block 1:

https://github.com/axieinfinity/ronin-snapshot?tab=readme-ov-file#chaindata-snapshot---archive-node

If you encounter any issues or require further assistance, please do not hesitate to let us know.

shaswatsaloni commented 6 months ago

Okay, got it. But do we need to download all the files to get the data from block 0?

minh-bq commented 6 months ago

Yes, you need to download all the files.

shaswatsaloni commented 6 months ago

Hi @minh-bq, I installed the prerequisite (zstd), provisioned 8 TB of disk, and tried to download only the first file.

Command used: (screenshot)

But it ran into an error after downloading around 117 GB. Attaching a screenshot of the error here:

(screenshot)

Could you please help me with this?

Thanks, Saloni.

minh-bq commented 6 months ago

According to the guide in the README, I believe you must download all the files and concatenate them before decompressing.
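Why concatenation is required can be demonstrated locally. The sketch below uses made-up file names (not the real snapshot) to show that `split` pieces only reassemble into the original file when concatenated in order:

```shell
# Create a sample file and split it into 1 KiB pieces, mimicking the
# snapshot's chaindata.tar.zst-000, -001, ... naming scheme.
head -c 4096 /dev/urandom > sample.bin
split -b 1024 -d -a 3 sample.bin sample.bin-

# Concatenate the pieces back in order; the result is byte-identical
# to the original, so decompression must run on the combined file,
# never on an individual piece.
cat sample.bin-0?? > combined.bin
cmp sample.bin combined.bin && echo "pieces reassemble correctly"
```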

shaswatsaloni commented 6 months ago

OK @minh-bq, I tried downloading only one file at a time, with this command:

wget -O chaindata.tar.zst "https://ss.roninchain.com/archive-mainnet-chaindata-20240306.tar.zst-000"

But it still failed at around 18% completion with this error:

(screenshot)

Could you please help me with this?

Thanks, Saloni.

minh-bq commented 6 months ago

This might be caused by an intermittent network error between your machine and the server. Normally, wget disconnects and only retries a limited number of times. You can add these flags to your command: --retry-connrefused (retry even when the connection is refused), --continue (resume from the last downloaded byte), and --tries=0 (retry indefinitely; the default is 20 attempts). For example:

wget --retry-connrefused --continue --tries=0 -O chaindata.tar.zst "https://ss.roninchain.com/archive-mainnet-chaindata-20240306.tar.zst-000"

Reference: https://linux.die.net/man/1/wget

shaswatsaloni commented 6 months ago

Hi @minh-bq, I tried the above command, but it has been stuck at 3% for around 4-5 hours.

Attaching a screenshot: (screenshot)

And one more quick question: I am using Ubuntu 18.04. Will that be enough, or do I need Ubuntu 20.04 or 22.04?

Thanks, Saloni.

shaswatsaloni commented 6 months ago

Checking on this again. Can anyone help me with this?

minh-bq commented 6 months ago

Hi, can you retry (Ctrl+C, then run the command again)? It should resume from where it left off, not from 0%.

tudoanm commented 6 months ago

First of all, I think you used the wrong section of the docs for downloading the chaindata snapshot.

(screenshot)

☝️ Above is the correct part, at the end of the README. The script there downloads 13 different files, which also means the -O chaindata.tar.zst in your command is not correct: it should be -O chaindata.tar.zst-00X (where X is the index of each file), because you will need to combine those files later.

Secondly, I think the script is just a usage example, so don't forget to customize the wget command in the script to retry, as minh-bq mentioned: --retry-connrefused --continue --tries=0. Then, despite any network error (as in your case), you can cancel the download at any time and re-run the command, and it will continue from where it stopped. Hope this helps!
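The two points above can be combined into a short loop. This is a sketch, not the README's script: the base URL is taken from the command earlier in this thread, and the piece count (13 files, assumed here to be numbered 000 through 012) comes from the comment above; check the README for the current snapshot name and count. The `echo` prints each command instead of running it; remove it to actually download.

```shell
# Dry run: print one resumable wget command per split file.
# BASE and the 000-012 numbering are assumptions from this thread.
BASE="https://ss.roninchain.com/archive-mainnet-chaindata-20240306.tar.zst"
for i in $(seq 0 12); do
  suffix=$(printf '%03d' "$i")   # 000, 001, ..., 012
  echo wget --retry-connrefused --continue --tries=0 \
       -O "chaindata.tar.zst-$suffix" "$BASE-$suffix"
done
```

Note the distinct -O name per piece, so a later download never overwrites an earlier one.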

shaswatsaloni commented 6 months ago

Hi, thanks for this. One more thing: do I need to download all the files at once, or can I try with only one file first to see if that works for me?

minh-bq commented 6 months ago

Currently, you have to download all the files, concatenate them, and then decompress. We'll try to improve that in the future.

tudoanm commented 6 months ago

If I am not wrong, you are asking whether to download them all at once or one by one? If so, yes, you can download them one by one, but watch the file names so a new download does not overwrite an already downloaded file. However, when decompressing, all of the files must first be concatenated into one.
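The end-to-end order of operations (split pieces, concatenate, decompress once) can be sketched with a local demo. Here gzip stands in for zstd so the sketch runs anywhere with coreutils; the real snapshot uses zstd pieces named chaindata.tar.zst-000 and so on, and would be extracted with `tar -I zstd` instead of `-z`:

```shell
set -e
# Local stand-in for the snapshot workflow (gzip instead of zstd).
rm -rf snapdemo && mkdir -p snapdemo/restore && cd snapdemo
echo "chain data from block 0" > chaindata.txt
tar -czf chaindata.tar.gz chaindata.txt

# The snapshot is shipped as split pieces; split(1) recreates that shape.
split -b 64 -d -a 3 chaindata.tar.gz chaindata.tar.gz-
rm chaindata.tar.gz

# All pieces must be concatenated into one stream before extraction.
cat chaindata.tar.gz-0?? | tar -xzf - -C restore
cat restore/chaindata.txt   # prints "chain data from block 0"
```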

shaswatsaloni commented 6 months ago

Hi, I was able to download all the tar files and am now working on extracting them. That's why this ticket is still open; once that is done, I will close it.

Thanks, Saloni

shaswatsaloni commented 6 months ago

Hi @minh-bq, the tar file downloads completed. But while extracting them with the command:

tar -I zstd -xvf chaindata.tar.zst

The extracted data is around 7 TB. The command ran for 24 hours, but in the end it says something like this:

(screenshot)

What does this mean? Should I proceed to run the node on top of this?

Thanks. Saloni

minh-bq commented 6 months ago

With this error, sadly, your downloaded file might be corrupted. @qui-pham, can you give the hashes of the split files so @shaswatsaloni can find out which split file is corrupted?

qui-pham commented 6 months ago

You can ignore this error and start the archive node. The error is caused by how tar handles the split files; we'll have updates in a new version. If you run into errors when starting the node, please ping us and we'll help.

minh-bq commented 4 months ago

Closing, as this seems to be resolved. Feel free to re-open if the problem still exists.