canglangzhige opened 2 years ago
I have a few other questions:
Bayesian update is used for 3D fusion, I think. The semantic point cloud is generated from the semantic and RGB images; the framework then performs raycasting and updates the label probabilities in each voxel with a Bayesian update.
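To make sure I understand the mechanism I am asking about, here is a minimal sketch of what I assume the per-voxel Bayesian label update looks like: each voxel keeps a probability vector over semantic labels, and every raycast observation multiplies in a measurement likelihood and renormalizes. The function name, the 3-label setup, and the confusion-matrix values are all hypothetical, not taken from the paper or the code.

```python
import numpy as np

NUM_LABELS = 3

# Assumed measurement model: likelihood[observed] is the row of
# probabilities P(network outputs `observed` | true label), i.e. a
# row of a hypothetical segmentation confusion matrix.
likelihood = np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
])

def bayesian_label_update(voxel_probs, observed_label):
    """Multiply the prior label probabilities by the measurement
    likelihood for the observed label, then renormalize."""
    posterior = voxel_probs * likelihood[observed_label]
    return posterior / posterior.sum()

# Usage: a voxel starts with a uniform prior; two raycast hits that
# both observe label 0 make label 0 the most probable label.
probs = np.full(NUM_LABELS, 1.0 / NUM_LABELS)
probs = bayesian_label_update(probs, 0)
probs = bayesian_label_update(probs, 0)
print(probs.argmax())  # → 0
```

If this matches the intended update rule, my question below is about whether the released experiments actually exercise it.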
The paper states that "we use a Bayesian update to update the label probabilities at each voxel". But in the experimental part, it looks like the point cloud is generated directly from the segmented RGB image and the depth image and then passed to voxblox for semantic reconstruction, without any Bayesian update. My understanding is that the experiments simply replace the ordinary RGB images with semantic images and do not perform a semantic fusion step. Is there an experiment covering this?