CyrielleAlbert closed this issue 2 years ago
There might be inconsistencies in your choice of parameters. You set `depth_scale=100`, which is unconventional for RGB-D images. What kind of data are you using (e.g. which sensor or dataset)?

Assuming you follow our convention of obtaining depth in meters after applying `depth_scale`, then `voxel_length=1` gives you huge voxels with a 1-meter edge length. At the same time you use `depth_trunc=0.04`, which rejects any points farther than 4 cm from the surface, a threshold far smaller than 1 meter.

So please double-check your dataset setup and change the parameters accordingly. Usually the only thing you need to verify and customize is `depth_scale`.
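To make the unit relationships above concrete, here is a minimal sanity-check sketch in plain Python (no Open3D needed). All numeric values are illustrative assumptions, not values taken from the issue:

```python
def depth_to_meters(raw_depth, depth_scale):
    """Convert a raw sensor depth value to meters by dividing by depth_scale."""
    return raw_depth / depth_scale

# With the conventional depth_scale=1000 (depth stored in millimeters),
# a raw reading of 1500 means 1.5 m:
assert depth_to_meters(1500, 1000) == 1.5

# With depth_scale=100 the same raw value would mean 15 m -- a hint that
# the scale does not match the sensor if scenes look stretched or empty.
assert depth_to_meters(1500, 100) == 15.0

# voxel_length should be small relative to the scene. A common choice for
# a room-scale scene is a ~4 m cube divided into 512 voxels per side:
voxel_length = 4.0 / 512.0   # ~7.8 mm voxels, far from the 1 m in the issue
sdf_trunc = 0.04             # a 4 cm truncation band spans several voxels
assert sdf_trunc > voxel_length
print(f"voxel_length = {voxel_length * 1000:.1f} mm")
```

The key invariant is that the truncation band should cover several voxels; with `voxel_length=1` (meter) and a 4 cm band, that relationship is inverted.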
I managed to find a coefficient of 1/260 in `depth_scale` that "solved" the problem (I'm not yet sure that I am getting what I want). I can't find in the documentation how to adapt the parameters depending on the dataset (I checked here). Is there any place in the documentation where this is explained?
This parameter should be given in the sensor/dataset documentation, so please check the source of your data provider.
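If the provider's documentation is unavailable, one hypothetical way to sanity-check `depth_scale` is to look at the raw value range of a depth frame: for an indoor scene, a plausible scale maps most pixels to roughly 0.3–10 m. This sketch uses NumPy only; `depth_raw` is fabricated stand-in data, not an actual frame from the issue:

```python
import numpy as np

# Fake raw depth frame; replace with your loaded depth image.
depth_raw = np.array([[500, 2600, 0],
                      [1300, 780, 2600]], dtype=np.uint16)

valid = depth_raw[depth_raw > 0].astype(np.float64)  # drop invalid zeros
for scale in (100.0, 260.0, 1000.0):
    meters = valid / scale
    print(f"scale={scale:6.0f}: median depth = {np.median(meters):.2f} m")
```

The idea is to pick the scale whose median lands in a physically plausible range for the scene; the 1/260 coefficient reported above suggests this sensor's data is closest to `depth_scale=260`.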
I have been following the example "Reconstruction system". I am stuck when trying to make the fragments. I have managed to get a point cloud for one RGBD image, but I can't get a point cloud from several RGBDImages with the ScalableTSDFVolume. It seems like nothing happens when calling the integrate() function: it returns an empty volume even after the integration of the first RGBDImage. Any idea where this could come from? Has anyone experienced the same? Here is the code I have (a bit different from the example, but same problem):
And here are the results for the integration of two RGBDImages:
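For the "integrate() returns an empty volume" symptom, a minimal pre-flight check (NumPy only, no Open3D required) is to count how many depth pixels actually survive truncation: if `depth_trunc` is smaller than every converted depth value, every pixel is discarded and the TSDF stays empty. The values below (`depth_raw`, `depth_scale`, a 2.6 m wall) are illustrative stand-ins; only `depth_trunc=0.04` comes from the discussion above:

```python
import numpy as np

depth_raw = np.full((480, 640), 2600, dtype=np.uint16)  # fake 2.6 m wall
depth_scale = 1000.0   # assumed millimeter depth
depth_trunc = 0.04     # the value from the issue: 4 cm

meters = depth_raw.astype(np.float64) / depth_scale
kept = np.count_nonzero((meters > 0) & (meters < depth_trunc))
print(f"pixels surviving depth_trunc={depth_trunc}: {kept}")

# Raising the truncation to e.g. 3.0 m keeps the whole frame.
kept_fixed = np.count_nonzero((meters > 0) & (meters < 3.0))
print(f"pixels surviving depth_trunc=3.0: {kept_fixed}")
```

With the 4 cm truncation, zero pixels survive, which would reproduce an empty volume regardless of how many frames are integrated.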