When I experiment with ShapeNet's full point cloud data, the segmentation accuracy is as good as reported in the paper. However, when I deployed the network in my own project, I found that when the point cloud is incomplete because it is captured from a single camera viewpoint, the segmentation is too poor to be usable. Is there any way to improve segmentation accuracy on such partial, single-view point clouds? I haven't found any notable research in this area; perhaps you know of an engineering workaround?
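One thing I have been considering (not something from this repo, just a hypothetical augmentation idea) is to narrow the train/test gap by training on simulated single-view scans of the full ShapeNet clouds. Below is a minimal NumPy sketch, where the function name and the crude depth-buffer approach are my own assumptions: the camera is assumed to look down the -z axis, points are binned onto an XY grid, and only the nearest point per cell is kept, which roughly mimics self-occlusion from one viewpoint.

```python
import numpy as np

def simulate_single_view(points, grid=64):
    """Keep only the points visible from a camera looking along -z.

    Hypothetical augmentation (not part of this repo): project points
    onto an XY grid and keep the nearest point per cell, a crude
    depth-buffer that drops occluded back-surface points.
    """
    xy = points[:, :2]
    mn, mx = xy.min(0), xy.max(0)
    # bin XY coordinates into grid cells
    cells = np.floor((xy - mn) / (mx - mn + 1e-9) * grid).astype(int)
    keys = cells[:, 0] * grid + cells[:, 1]
    # sort by depth (smaller z = closer to camera), keep first point per cell
    order = np.argsort(points[:, 2], kind="stable")
    _, first = np.unique(keys[order], return_index=True)
    return points[order[first]]

# usage: augment full clouds during training so the network also
# sees partial geometry, then fine-tune the part-segmentation model
full = np.random.rand(2048, 3)
partial = simulate_single_view(full)
```

A more faithful alternative would be Open3D's hidden-point-removal operator or rendering depth maps from the mesh, but even a crude simulation like this might reduce the domain gap you are seeing in Figures 2 and 3.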
Figure 1 shows the part-segmentation result when the full ShapeNet point cloud is used as input; Figures 2 and 3 show the part-segmentation results on point clouds captured by a single camera.
![Figure3](https://github.com/yanx27/Pointnet_Pointnet2_pytorch/assets/100884149/f4ea51da-0f56-43e3-a75e-1a8bd3850d4b)