hao-liu-china opened this issue 5 years ago
The ScanNet v2 release is 1.3 TB. My computers don't have that much storage, and it takes a long time to download that much data (my regional network isn't in the U.S.). Can somebody help me?
I've now solved this problem: "scannetv2_seg_dataset_rgb21c_pointid.py" only requires "_vh_clean_2.notes.ply" and "_vh_clean_2.ply". So we only need to download these two file types from ScanNet, and they take only about 20 GB of storage.
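For reference: if you use the official `download-scannet.py` script that the ScanNet team sends you after you accept their terms of use, it should be possible to fetch only these two suffixes with its `--type` option (e.g. `python download-scannet.py -o scannet/ --type _vh_clean_2.ply`, and likewise for the other suffix) instead of pulling the full 1.3 TB release. I haven't checked every version of the script, so treat the exact flag as an assumption.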
Thank you so much!
Hello, how did you download the ScanNet dataset? http://www.scan-net.org/ doesn't have a download link.
Thank you for your reply, but in ScanNet I cannot find _vh_clean_2.notes.ply. Can you reconfirm the file name? Thank you again.
I think he means _vh_clean_2.labels.ply.
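In case it helps, here is a minimal sketch (not the repo's actual code) of reading the only two per-scene files the preprocessing needs. It assumes the `plyfile` package (`pip install plyfile`) and the standard ScanNet v2 vertex properties (`x`/`y`/`z`, `red`/`green`/`blue`, and `label`):

```python
import numpy as np
from plyfile import PlyData

def load_scene(scene_dir, scene_id):
    """Read geometry+color and per-vertex labels for one ScanNet v2 scene."""
    # Geometry and color: <scene_id>_vh_clean_2.ply
    mesh = PlyData.read(f"{scene_dir}/{scene_id}_vh_clean_2.ply")
    v = mesh["vertex"]
    xyz = np.stack([v["x"], v["y"], v["z"]], axis=1)           # (N, 3) coordinates
    rgb = np.stack([v["red"], v["green"], v["blue"]], axis=1)  # (N, 3) colors

    # Per-vertex semantic labels: <scene_id>_vh_clean_2.labels.ply
    labels_ply = PlyData.read(f"{scene_dir}/{scene_id}_vh_clean_2.labels.ply")
    label = np.asarray(labels_ply["vertex"]["label"])          # (N,) raw label ids
    return xyz, rgb, label

# Example (hypothetical path layout):
# xyz, rgb, label = load_scene("scannet/scans/scene0000_00", "scene0000_00")
```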
Can you upload scannet_train_rgb21c_pointid.pickle, scannet_val_rgb21c_pointid.pickle, and scannet_test_rgb21c_pointid.pickle?
Thank you!
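If nobody uploads them: judging by the names, these pickles appear to be what "scannetv2_seg_dataset_rgb21c_pointid.py" produces, so you should be able to regenerate them locally by running that script once you have the two PLY file types discussed above.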