Closed: qorgh346 closed this issue 2 years ago
Hi, I am not quite sure that I understand your question. The segmentation method takes depth maps and outputs temporally consistent segmentations along with the reconstruction. The PLY file is used by the system to render depth maps, since the original depth images provided by 3RScan have very low resolution. In your own application, you can write a custom data loader that uses the original depth from any source, but you may also need to adjust the parameters of the InSeg system in order to get a good segmentation output.
I want to run the fusion system on my own data. To do that, could you explain how to create a custom data loader?
Hi, it is a bit tricky to explain here how to create a data loader... You can follow how I created the loaders for ScanNet and 3RScan under this folder: https://github.com/ShunChengWu/SceneGraphFusion/tree/main/libDataLoader
Basically, you can inherit from the base class in https://github.com/ShunChengWu/SceneGraphFusion/blob/main/libDataLoader/include/dataLoader/dataset_loader.h and override its functions.
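For reference, here is a minimal sketch of what such a custom loader could look like. The base-class name and its virtual methods below (`DatasetLoaderBase`, `Retrieve()`, `GetRGB()`, `GetDepth()`, `GetPose()`) are assumptions for illustration only; check `dataset_loader.h` for the actual interface to inherit from and override.

```cpp
// Hypothetical sketch of a custom data loader. The base class and method
// names are stand-ins; the real interface is defined in
// libDataLoader/include/dataLoader/dataset_loader.h.
#include <opencv2/opencv.hpp>
#include <Eigen/Core>
#include <cstdio>
#include <string>

// Assumed base class, standing in for the one in dataset_loader.h.
class DatasetLoaderBase {
public:
    virtual ~DatasetLoaderBase() = default;
    virtual bool Retrieve() = 0;                          // advance to the next frame
    virtual const cv::Mat&         GetRGB()   const = 0;  // colour image
    virtual const cv::Mat&         GetDepth() const = 0;  // depth aligned to the colour frame
    virtual const Eigen::Matrix4f& GetPose()  const = 0;  // camera-to-world pose
};

// A loader for your own recordings, e.g. a folder of color_%06d.png,
// depth_%06d.png (16-bit, millimetres) and per-frame poses.
class MyDatasetLoader : public DatasetLoaderBase {
public:
    explicit MyDatasetLoader(std::string folder) : folder_(std::move(folder)) {}

    bool Retrieve() override {
        char name[64];
        std::snprintf(name, sizeof(name), "/color_%06d.png", frame_);
        rgb_ = cv::imread(folder_ + name, cv::IMREAD_COLOR);
        std::snprintf(name, sizeof(name), "/depth_%06d.png", frame_);
        depth_ = cv::imread(folder_ + name, cv::IMREAD_UNCHANGED);
        // Pose loading omitted to keep the sketch short; identity as placeholder.
        pose_.setIdentity();
        ++frame_;
        return !rgb_.empty() && !depth_.empty();
    }

    const cv::Mat&         GetRGB()   const override { return rgb_; }
    const cv::Mat&         GetDepth() const override { return depth_; }
    const Eigen::Matrix4f& GetPose()  const override { return pose_; }

private:
    std::string folder_;
    int frame_ = 0;
    cv::Mat rgb_, depth_;
    Eigen::Matrix4f pose_ = Eigen::Matrix4f::Identity();
};
```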
Is there any way to run the GraphSLAMGui system on the existing 3RScan dataset without using files such as labels.instances.annotated.v2.ply and mesh.refined.v2.obj?
Yes, of course. But the current system requires aligned RGB and depth images (in terms of their intrinsics). Feel free to modify the system accordingly. These two files are only used to render the depth.
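For anyone feeding their own sensor data: below is a small sketch of what "aligned in terms of their intrinsics" means in practice, i.e. registering a depth image into the RGB camera's intrinsics given a known depth-to-colour extrinsic. This is generic RGB-D registration code under those assumptions, not code taken from SceneGraphFusion itself.

```cpp
// Reproject a 16-bit depth image (millimetres) into the colour camera's
// intrinsics, assuming a known depth-to-colour extrinsic T_d2c.
#include <opencv2/opencv.hpp>
#include <Eigen/Core>
#include <cmath>
#include <cstdint>

cv::Mat AlignDepthToColor(const cv::Mat& depth,            // CV_16U, millimetres
                          const Eigen::Matrix3f& K_depth,  // depth intrinsics
                          const Eigen::Matrix3f& K_color,  // colour intrinsics
                          const Eigen::Matrix4f& T_d2c,    // depth-to-colour extrinsic
                          cv::Size colorSize) {
    cv::Mat out(colorSize, CV_16U, cv::Scalar(0));
    for (int v = 0; v < depth.rows; ++v) {
        for (int u = 0; u < depth.cols; ++u) {
            const uint16_t d = depth.at<uint16_t>(v, u);
            if (d == 0) continue;                  // invalid measurement
            const float z = d * 1e-3f;             // mm -> metres
            // Back-project into the depth camera frame.
            Eigen::Vector4f p;
            p << (u - K_depth(0, 2)) * z / K_depth(0, 0),
                 (v - K_depth(1, 2)) * z / K_depth(1, 1),
                 z, 1.f;
            // Transform into the colour camera frame and project.
            const Eigen::Vector4f q = T_d2c * p;
            if (q.z() <= 0.f) continue;
            const int uc = static_cast<int>(std::lround(K_color(0, 0) * q.x() / q.z() + K_color(0, 2)));
            const int vc = static_cast<int>(std::lround(K_color(1, 1) * q.y() / q.z() + K_color(1, 2)));
            if (uc < 0 || vc < 0 || uc >= colorSize.width || vc >= colorSize.height) continue;
            // Keep the closest depth if several source pixels land on the same target pixel.
            uint16_t& dst = out.at<uint16_t>(vc, uc);
            const uint16_t dz = static_cast<uint16_t>(q.z() * 1e3f);
            if (dst == 0 || dz < dst) dst = dz;
        }
    }
    return out;
}
```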
yes!!! thank you!!
Hello, Question 1: If RGB and depth images are given to the GraphSLAM GUI system, will segmentation and scene reconstruction be performed? If not, are the dataset files, such as the .ply and mesh files used in training, only needed for rendering?