Closed yinlingluo closed 5 years ago
Check out https://github.com/Sekunde/3D-SIS/tree/master/datagen: you need to run our C++ code there to generate the .scene file and the world2grid.txt file.
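The exact contents of world2grid.txt are produced by that C++ tool, but conceptually a world-to-grid transform is just a uniform scale by the inverse voxel size plus a translation by the grid origin. A minimal sketch (my own illustration, not the authors' code):

```python
import numpy as np

def world2grid(grid_origin, voxel_size):
    """Build a 4x4 transform mapping world coordinates to voxel indices:
    grid = (world - grid_origin) / voxel_size."""
    T = np.eye(4)
    T[:3, :3] /= voxel_size
    T[:3, 3] = -np.asarray(grid_origin, dtype=float) / voxel_size
    return T

# Example: grid origin at world point (1.0, 2.0, 0.0), 5 cm voxels
M = world2grid((1.0, 2.0, 0.0), 0.05)
p_world = np.array([1.05, 2.10, 0.0, 1.0])  # homogeneous world point
p_grid = M @ p_world                        # approximately voxel (1, 2, 0)
```

Whether the file stores this matrix row-major, column-major, or as its inverse is something to verify against the example data.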
@Sekunde Can you please suggest the proper steps to train the model on my own custom dataset, which contains RGB images, depth maps, .ply files, poses, and labels?
You need to generate voxel data (a signed distance field) from your depth images and annotations (for an example, see https://github.com/Sekunde/3D-SIS/blob/master/datagen/ScanReal/src); you can also generate voxel data from a triangle mesh (.ply) (see https://github.com/christopherbatty/SDFGen).
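The depth-image route amounts to volumetric fusion: each depth map is projected into the voxel grid and a truncated signed distance is accumulated per voxel. As an illustrative sketch of that idea in NumPy (not the authors' C++ implementation, and the truncation value is an assumption):

```python
import numpy as np

def integrate_tsdf(tsdf, weight, depth, K, cam2world, origin, voxel_size, trunc=0.15):
    """Fuse one metric depth image (H, W) into a dense TSDF grid.
    tsdf / weight are (X, Y, Z) arrays updated in place; K is the 3x3
    camera intrinsics and cam2world the 4x4 camera pose."""
    X, Y, Z = tsdf.shape
    # World coordinates of every voxel center
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing='ij')
    pts = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size + np.asarray(origin)
    # Bring voxel centers into the camera frame and project (pinhole model)
    world2cam = np.linalg.inv(cam2world)
    cam = pts @ world2cam[:3, :3].T + world2cam[:3, 3]
    z = cam[:, 2]
    z_safe = np.where(z > 0, z, 1.0)           # avoid division by zero
    u = np.round(cam[:, 0] / z_safe * K[0, 0] + K[0, 2]).astype(int)
    v = np.round(cam[:, 1] / z_safe * K[1, 1] + K[1, 2]).astype(int)
    H, W = depth.shape
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    sdf = d - z                                 # signed distance along the viewing ray
    use = valid & (d > 0) & (sdf > -trunc)      # skip voxels far behind the surface
    tval = np.clip(sdf / trunc, -1.0, 1.0)
    t_flat, w_flat = tsdf.reshape(-1), weight.reshape(-1)
    # Running weighted average, as in standard volumetric fusion
    t_flat[use] = (t_flat[use] * w_flat[use] + tval[use]) / (w_flat[use] + 1.0)
    w_flat[use] += 1.0

# Example: 4x4x4 grid of 0.5 m voxels in front of an identity-pose camera
tsdf = np.zeros((4, 4, 4)); weight = np.zeros((4, 4, 4))
K = np.array([[100.0, 0, 32], [0, 100.0, 24], [0, 0, 1]])
integrate_tsdf(tsdf, weight, np.ones((48, 64)), K, np.eye(4),
               origin=(-0.5, -0.5, 0.5), voxel_size=0.5)
```

Calling this once per frame with the corresponding pose accumulates the full scene volume; the repo's C++ code is the authoritative version of this step.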
For a general introduction to our data generation, see https://github.com/Sekunde/3D-SIS/tree/master/datagen. For the RGB images, you can train a 2D semantic segmentation network, e.g. ENet.
Basically, our code takes in data from a data loader in the format presented in https://github.com/Sekunde/3D-SIS/blob/master/lib/datasets/dataset.py. Alternatively, you can write your own data loader; as long as its output is identical to that of dataset.py, you can reuse the rest of our code.
Ok, thank you.
Hello!
We are trying to use your model on our own dataset, but we came across some problems with the data we need to prepare as input. We downloaded data for some scenes from ScanNet v2 and ran datagen/prepare_2d_data.py on them. However, we only get color/depth/pose/label from that. We are quite confused about:
1) What do the .scene files in scannet_example_data/scenes contain?
2) How can we generate those .scene files for our own dataset?
3) How should we construct the matrix in world2grid.txt in scannet_example_data/images?
Thank you!