Hi @mohcenaouadj, thank you for your question. For running Co-SLAM on iPhone datasets, here are some tips:
@HengyiWang Thank you so much for your reply, I really appreciate it.
I'm still having the same problem. I fixed the intrinsics and the depth scale (both provided by the app itself) and double-checked them with COLMAP, yet even the vis_bound script isn't giving good results. When I tried it on the Replica dataset the result was perfect, so I was wondering what other parameters could be responsible for this.
Vis Bound result:
Co-SLAM result:
Thanks again for your help!
Hi @mohcenaouadj, can you check the camera pose (c2w or w2c; if it is w2c, you need to invert it in your dataset class) and its convention (OpenCV or OpenGL; we use the GL convention here)?
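For anyone debugging the same thing, here is a minimal sketch of both checks; the helper names are mine (not from the Co-SLAM codebase), and poses are assumed to be 4x4 homogeneous matrices:

```python
import numpy as np

def w2c_to_c2w(w2c: np.ndarray) -> np.ndarray:
    """Invert a 4x4 world-to-camera pose to get camera-to-world."""
    return np.linalg.inv(w2c)

def opencv_to_opengl(c2w_cv: np.ndarray) -> np.ndarray:
    """Convert a c2w pose from the OpenCV camera convention (x right,
    y down, z forward) to OpenGL (x right, y up, z backward) by
    flipping the camera's y and z axes."""
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    return c2w_cv @ flip

# Example: dataset stores w2c poses in the OpenCV convention
w2c = np.eye(4)
c2w_gl = opencv_to_opengl(w2c_to_c2w(w2c))
```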
Hello @HengyiWang,
Yes, it worked just fine. I have a few other questions, if I may:
How does the model actually use GPU memory? In some cases with 2000 frames it uses only 2 GB, while in others with far fewer frames, say 500, it uses more than 2 GB. What really drives the memory usage?
Regarding mesh quality: I read in another issue thread that the voxel size is about 3 cm. Which voxel were you referring to, SDF or RGB? In any case, my interest is in representing small objects, where I need to decrease the voxel size, so I was wondering where exactly I should make that modification.
Thank you again.
`voxel_sdf` and `voxel_rgb` are the voxel sizes of the SDF and color hash grids. By setting `oneGrid: True`, you only have one SDF feature grid. We also have `voxel_eval` and `voxel_final: 0.03`, which are the voxel sizes used to extract the mesh. You can use a smaller `voxel_final` to extract the mesh at a higher resolution. In the meantime, you may want to tune the size of `voxel_sdf` to achieve better performance.
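To make these knobs concrete, here is a hedged sketch of overriding them; the key names come from this thread, but the values and the flat layout are illustrative assumptions, not the repo's actual config files:

```python
import yaml  # PyYAML

# Illustrative Co-SLAM-style config snippet; key names are from this thread,
# values and (flat) layout are assumptions rather than the repo's defaults.
cfg_text = """
oneGrid: True       # a single SDF feature grid, no separate color grid
voxel_sdf: 0.02     # SDF hash-grid voxel size; tune for quality vs. speed
voxel_final: 0.015  # mesh-extraction voxel size, finer than the 3 cm default
"""
cfg = yaml.safe_load(cfg_text)
assert cfg["voxel_final"] < 0.03  # finer mesh extraction for small objects
```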
Hello,
Great work, I really appreciate everything about this project.
I was wondering about the iPhone dataset: I have an RGB-D dataset captured with PolyCam on an iPhone, with image data of shape 1024×768 and depth data of shape 256×192.
I'm trying to use the iPhone configuration with this dataset, yet for some reason I get a really bad reconstruction and trajectory. Which parts of the configuration do you recommend adjusting? So far I have only changed the camera size and the intrinsics.
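One common pitfall with such data is that the intrinsics are valid for only one of the two resolutions. Here is a minimal sketch (helper names are mine, not from Co-SLAM) of bringing the depth up to the RGB resolution and rescaling the intrinsics to whatever resolution the dataloader actually uses:

```python
import cv2
import numpy as np

def upsample_depth(depth: np.ndarray, rgb_hw: tuple) -> np.ndarray:
    """Upsample a 256x192 depth map to the 1024x768 RGB resolution with
    nearest-neighbour interpolation, so depth values are not blended
    across object boundaries."""
    h, w = rgb_hw
    return cv2.resize(depth, (w, h), interpolation=cv2.INTER_NEAREST)

def scale_intrinsics(K: np.ndarray, src_hw: tuple, dst_hw: tuple) -> np.ndarray:
    """Rescale fx, cx (by the width ratio) and fy, cy (by the height ratio)
    when K was calibrated at src_hw but images are used at dst_hw."""
    sy, sx = dst_hw[0] / src_hw[0], dst_hw[1] / src_hw[1]
    K = K.copy()
    K[0, 0] *= sx  # fx
    K[0, 2] *= sx  # cx
    K[1, 1] *= sy  # fy
    K[1, 2] *= sy  # cy
    return K

# Example with illustrative intrinsics given for the 1024x768 RGB;
# the depth map is upsampled to match.
K_rgb = np.array([[1000.0, 0.0, 512.0],
                  [0.0, 1000.0, 384.0],
                  [0.0, 0.0, 1.0]])
depth_full = upsample_depth(np.zeros((192, 256), np.float32), (768, 1024))
```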
I also tried using the vis_bound script to generate the corresponding bounding box, but the output is always `TriangleMesh with 0 points and 0 triangles`.