gulvarol / surreal

Learning from Synthetic Humans, CVPR 2017
http://www.di.ens.fr/willow/research/surreal

How to download the surreal_v1 dataset? #47

Closed wongkwan closed 3 years ago

wongkwan commented 4 years ago

I used the link you provided to download the SURREAL v1 dataset, but the speed was too slow. Could you please tell me how to download it faster?

gulvarol commented 4 years ago

Could you tell me more precisely what "too slow" means, e.g., the estimated download time in minutes/hours? The zip file is 85GB, so the speed depends on your internet connection. If you do not need all the data, you can do a partial download as explained in the README. You can find the sizes of the different parts of the data here: https://github.com/gulvarol/surreal#4-storage-info

wongkwan commented 4 years ago

Thank you very much for your reply. I've got all the data. Regarding the depth information, are there only two depth values for the whole body? Shouldn't every pixel on the body have its own depth value, as in the paper? Thank you very much if you can help me!

gulvarol commented 4 years ago

Sorry, I'm not sure I understand the question. There is a single depth map per video frame, i.e., one depth value per pixel.
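
As a rough illustration, here is a minimal sketch of loading one frame's depth map, assuming the dataset's convention of one `*_depth.mat` file per clip with one array per frame (the filename and the variable name `depth_1` are assumptions for illustration; adapt them to your local paths):

```python
# Minimal sketch: inspect one frame's depth map from a SURREAL depth file.
# The path and the per-frame variable name are assumptions, not verified
# against the actual data layout.
import scipy.io

mat = scipy.io.loadmat("01_01_c0001_depth.mat")  # hypothetical path
depth = mat["depth_1"]                           # depth map for frame 1
print(depth.shape, depth.dtype)                  # (height, width): one value per pixel
```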

MaxGodTier commented 3 years ago

I was downloading this at around 350KB/s on a 1Gbps connection. After 2 days the download was interrupted at 42%, and my download manager says your server doesn't support resuming. Needless to say, I'm livid now; not enabling resuming on an 86GB file served at such slow speeds is asking for trouble.

gulvarol commented 3 years ago

I would recommend downloading with the provided download scripts instead of the zip file if you are worried about the connection failing midway. The scripts download the individual files one by one in a loop, so you can resume. You might need to add one line that skips the download if the file already exists.
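
As a rough illustration of that skip-if-exists behaviour, a loop like the following would do it (a minimal sketch, assuming a plain text file of URLs; `filelist.txt` and the filenames are hypothetical, not the actual script contents):

```python
# Minimal sketch of a resumable download loop: fetch files one by one
# and skip anything already on disk. "filelist.txt" is a hypothetical
# list of URLs, one per line.
import os
import urllib.request

with open("filelist.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    fname = os.path.basename(url)
    if os.path.exists(fname):          # the one skip-if-exists line
        print(f"Skipping {fname} (already downloaded)")
        continue
    print(f"Downloading {fname} ...")
    urllib.request.urlretrieve(url, fname)
```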

MaxGodTier commented 3 years ago

The drawback of the script solution is that downloading 275GB of uncompressed files at 300-350KB/s will take approximately 10 days (IF everything goes perfectly smoothly), versus 3 days for an 86GB archive containing all the files. The patience of a saint is required to download this dataset.
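
For what it's worth, those estimates check out on a quick back-of-the-envelope calculation (taking 1GB as 10^9 bytes and the midpoint speed of ~325KB/s):

```python
# Back-of-the-envelope check of the download-time estimates above.
def days(size_gb, speed_kb_per_s):
    return size_gb * 1e9 / (speed_kb_per_s * 1e3) / 86400

print(f"{days(275, 325):.1f} days")  # ~9.8 days for the uncompressed files
print(f"{days(86, 325):.1f} days")   # ~3.1 days for the zip archive
```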