andyzeng / tsdf-fusion

Fuse multiple depth frames into a TSDF voxel volume.
http://andyzeng.github.io/
BSD 2-Clause "Simplified" License

redkitchen incomplete #10

Open lo2aayy opened 6 years ago

lo2aayy commented 6 years ago

Hi,

I tried reconstructing the whole redkitchen sequence, but the reconstruction doesn't look like the one given in the dataset: some parts of it are cropped/incomplete. Do you have any idea why?

The picture below is my reconstruction:

[screenshot: my reconstruction, 2018-01-16 23:07:31]

and this picture is the original one:

[image: original redkitchen model]

andyzeng commented 6 years ago

Hello! Have you tried increasing the size of the volumetric voxel grid? With the default parameters (`voxel_size = 0.006` and `voxel_grid_dim = 500x500x500`), the voxel grid only covers a 3 m x 3 m x 3 m area.
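As a quick sanity check (an editorial sketch, not code from this repo), the metric coverage per axis is just `voxel_size * voxel_grid_dim`:

```python
# Metric coverage of the TSDF volume is voxel_size * voxel_grid_dim per axis.
voxel_size = 0.006                # meters per voxel (default)
voxel_grid_dim = (500, 500, 500)  # default grid dimensions

extent_m = tuple(round(d * voxel_size, 6) for d in voxel_grid_dim)
print(extent_m)  # (3.0, 3.0, 3.0) -> a 3 m x 3 m x 3 m volume
```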

lo2aayy commented 6 years ago

Thanks for your reply. But when I increase `voxel_grid_dim` to any size above 1200x1200x1200, I get the following error:

```
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)
```

and below 1200 the reconstruction is still incomplete.

I also have another query: is it possible to directly use the binary TSDF volume from the 7-Scenes dataset (the .raw file) with your tsdf2mesh function to generate a mesh?

andyzeng commented 6 years ago

The error occurs because the voxel grid is too large to fit into your memory. Since the code creates two voxel grids (one saving distance values and another saving weights for computing running averages), a voxel grid of size 1200x1200x1200 takes roughly 14 GB of RAM and GPU memory.
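The arithmetic behind that figure (a back-of-the-envelope sketch, assuming 4-byte floats per voxel):

```python
# Two float32 grids (TSDF distance values + fusion weights) at 1200^3 voxels.
dim = 1200
bytes_per_voxel = 4   # float32
num_grids = 2         # distance values + weights

total_gb = num_grids * dim**3 * bytes_per_voxel / 1e9
print(f"{total_gb:.1f} GB")  # 13.8 GB
```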

You'll have to play around with the parameters (`voxel_grid_origin_*`, `voxel_size`, and `voxel_grid_dim_*`) to get the same result as the mesh you showed. Try `voxel_size = 0.01` to make each voxel 1 cm, which lowers the resolution but increases the metric coverage of the volume without increasing memory usage.
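The trade-off can be sketched numerically (illustrative only; same grid dimensions, two voxel sizes):

```python
# Memory depends only on grid dimensions; coverage scales with voxel_size.
dim = 500
mem_gb = 2 * dim**3 * 4 / 1e9   # two float32 grids, independent of voxel_size
for voxel_size in (0.006, 0.01):
    extent = round(dim * voxel_size, 6)
    print(f"voxel_size={voxel_size}: covers {extent} m per axis at {mem_gb} GB")
```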

The tsdf2mesh in this repository is not compatible out-of-the-box with 7-Scenes .raw files. You will need to modify tsdf2mesh to support them.

lo2aayy commented 6 years ago

Thanks, but I was trying to use the info in the .mhd file of redkitchen in the 7-Scenes dataset to make a mesh using tsdf2mesh. The .mhd file says that the offset is 0 0 3000 and the element spacing is 11.718750, with mm as the unit, so I converted it to meters and reconstructed the scene. The problem is that the mesh now spans 0.48–5.82 in y, 0.0117–6.00 in x, and 3.01–7.96 in z, but the poses in the dataset do not fall inside this model. Could this be because the model is in camera coordinates and has to be converted to global coordinates? Also, when you swap the two columns while converting the model from voxel coordinates to camera coordinates, don't you have to change the sign?
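For reference, a minimal sketch of that conversion (the Offset and ElementSpacing values are the ones quoted from the .mhd above; the 512^3 grid size is an assumption, so check your own header):

```python
import numpy as np

offset_mm = np.array([0.0, 0.0, 3000.0])  # "Offset" from the .mhd
spacing_mm = 11.718750                    # "ElementSpacing" from the .mhd

def voxel_to_world_m(ijk):
    """Map integer voxel indices to world coordinates in meters."""
    return (offset_mm + np.asarray(ijk, dtype=float) * spacing_mm) / 1000.0

print(voxel_to_world_m((0, 0, 0)))        # volume origin: [0. 0. 3.]
print(voxel_to_world_m((511, 511, 511)))  # opposite corner of a 512^3 grid
```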

andyzeng commented 6 years ago

> Could this be because the model is in camera coordinates and has to be converted to global coordinates?

Correct. The fused model created by demo.cu lies in the camera coordinates of the base frame that you specify (see `base_frame_idx` in demo.cu). This differs from the models shipped with the 7-Scenes dataset.

> Also, when you swap the two columns while converting the model from voxel coordinates to camera coordinates, don't you have to change the sign?

No. The transformation between voxel coordinates and the base camera coordinates (the frame the model was fused in) amounts only to a translation and a scaling. The swap in tsdf2mesh.m is there solely to account for Matlab's y-first indexing.
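In other words (a minimal sketch; the origin below is an illustrative value, not the repo's actual default):

```python
import numpy as np

# Voxel -> base-camera coordinates: a scale by voxel_size plus a translation
# by the grid origin. No axis is negated.
voxel_size = 0.006
voxel_grid_origin = np.array([-1.5, -1.5, 0.5])  # illustrative origin

def voxel_to_camera(ijk):
    return voxel_grid_origin + np.asarray(ijk, dtype=float) * voxel_size

# Matlab indexes arrays y-first, so tsdf2mesh.m swaps the first two vertex
# columns only to undo that indexing convention; signs stay the same.
print(voxel_to_camera((0, 0, 0)))  # the grid origin itself: [-1.5 -1.5  0.5]
```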

lo2aayy commented 6 years ago

Now I can read the .raw file given in the redkitchen dataset and create a mesh using tsdf2mesh, but it's not compatible with the poses in the redkitchen dataset: when I plot the poses, they don't lie on the mesh. Do you have any idea why?

ez4lionky commented 4 years ago

> Now I can read the .raw file given in the redkitchen dataset and create a mesh using tsdf2mesh, but it's not compatible with the poses in the redkitchen dataset: when I plot the poses, they don't lie on the mesh. Do you have any idea why?

Did you solve this problem? How can I read the .raw file from the 7-Scenes dataset and generate a mesh?

The Python version of TSDF fusion seems to implement automatic estimation of the voxel volume bounds.
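For anyone landing here, one possible route (a hedged sketch, not this repo's code): load the raw volume with NumPy and mesh it with scikit-image's marching cubes. The file name, dtype, and 512^3 dimensions below are assumptions, so check the accompanying .mhd header first. The demo runs on a synthetic signed-distance sphere standing in for the real file:

```python
import numpy as np
from skimage import measure  # pip install scikit-image

def load_tsdf(path, dim=512, dtype=np.float32):
    """Read a raw TSDF volume; dim/dtype must match the .mhd header."""
    return np.fromfile(path, dtype=dtype).reshape(dim, dim, dim)

def tsdf_to_mesh(tsdf, spacing_mm=11.718750, offset_mm=(0.0, 0.0, 3000.0)):
    """Extract the zero level set and map vertices to meters."""
    verts, faces, normals, _ = measure.marching_cubes(tsdf, level=0.0)
    verts_m = (verts * spacing_mm + np.asarray(offset_mm)) / 1000.0
    return verts_m, faces

# Demo on a synthetic sphere SDF (radius 10 voxels) instead of a real file:
x, y, z = np.mgrid[-16:16, -16:16, -16:16]
sphere = (np.sqrt(x**2 + y**2 + z**2) - 10.0).astype(np.float32)
verts, faces = tsdf_to_mesh(sphere, spacing_mm=1.0, offset_mm=(0, 0, 0))
print(verts.shape, faces.shape)
```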