charlesq34 / pointnet

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

Semantic segmentation inference with different transformations #212

Open morimkb opened 4 years ago

morimkb commented 4 years ago

Hi,

I have trained the sem_seg network on the S3DIS dataset and prepared my own data collected from a stereo camera, using the same annotation process to create a labelled .npy file to feed the network. My first inference run gave about 45% accuracy, with my point cloud in a different orientation than S3DIS; after rotating my point cloud to match the S3DIS orientation, the result improved significantly to 65%. The paper claims the network is invariant to transformations of the objects, so why are my results so different? I used the segmentation network without the T-net and rotated the whole scene together with the segmented objects.
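For reference, a minimal sketch of the kind of alignment rotation described above, assuming the scene only needs a rotation about the vertical (z) axis and that the point array is N x 6 (XYZRGB) as in the annotated S3DIS .npy files; the file path and angle are placeholders you would measure from your own setup:

```python
import numpy as np

def rotate_scene_z(points, angle_rad):
    """Rotate an (N, 6) XYZRGB point array about the z-axis.

    Only the XYZ columns are rotated; the color columns are left untouched.
    """
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    rotated = points.copy()
    rotated[:, :3] = points[:, :3] @ rot.T
    return rotated

# Hypothetical usage: align a stereo-camera scan with the S3DIS orientation.
scene = np.load('my_scene.npy')             # placeholder path
aligned = rotate_scene_z(scene, np.pi / 2)  # placeholder angle
```

Relatedly, the repo's provider.py applies random rotations as training-time augmentation (rotate_point_cloud), which is one way to make a trained model less sensitive to the input orientation.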

My other question is about improving my inference results. I deliberately did not apply any filtering or noise-removal algorithm, to avoid making the point cloud look synthetic, but is there a proper way to do it? Also, should the model be able to predict a Sofa from a point cloud captured from a single viewpoint, for example with the camera placed at a 30-degree angle? In my case it labels the Sofa as clutter. I get good results for Floor and Wall, but not for Sofa or Table.
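On the noise-removal question, a minimal sketch of one common approach, statistical outlier removal, using the Open3D library (an assumption on my part; it is not used by this repo), with placeholder file path and parameter values to tune:

```python
import numpy as np
import open3d as o3d

# Load an (N, 6) XYZRGB array such as the annotated .npy file described above.
data = np.load('my_scene.npy')  # placeholder path

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(data[:, :3])

# Drop points whose mean distance to their 20 nearest neighbors is more than
# 2 standard deviations above the average; both values are placeholders.
filtered, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20,
                                                    std_ratio=2.0)

# Keep the color/label columns in sync with the surviving points.
cleaned = data[np.asarray(kept_idx)]
```

This removes isolated stereo-matching outliers while leaving the real surface geometry intact, so it should not make the cloud look artificially synthetic.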

madinwei commented 1 year ago

@morimkb Hi there, I am also interested in running segmentation inference. Would you mind sharing any details on how to get sem_seg.py to train and then run inference? Any info would help.