WangYueFt / dgcnn


Potential discrepancy between training and testing for part segmentation #56

Closed: imankgoyal closed this issue 4 years ago

imankgoyal commented 4 years ago

Dear Wang,

I really liked your paper, and thanks for sharing your code. I think there is a potential discrepancy between the training and test setups for part segmentation. It would be great if you could have a look and clarify a few doubts I have.

Looking forward to your response.

Best,
Ankit

WangYueFt commented 4 years ago

@syb7573330 Can we check whether there is a bug in the segmentation implementation?

imankgoyal commented 4 years ago

Hi @WangYueFt and @syb7573330, I was wondering if you got a chance to look into the issue.

syb7573330 commented 4 years ago

I will check later. Thanks

imankgoyal commented 4 years ago

Hi @syb7573330, I had one other question, in continuation of issue https://github.com/WangYueFt/dgcnn/issues/8.

Can you please confirm what exactly you mean by "the best results during the training process"? It looks like a model is saved every 5 epochs (40 times over the course of training). So is the final reported test-set result the maximum over all 40 saved models? Also, am I right in assuming that the metric over which you take the maximum is mean instance IoU?

https://github.com/WangYueFt/dgcnn/blob/master/tensorflow/part_seg/train_multi_gpu.py#L381
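To pin down what I am asking, here is a minimal sketch of the "max over saved checkpoints" procedure I have in mind. This is not code from this repo; `load_checkpoint` and `evaluate_mean_instance_iou` are hypothetical stand-ins for whatever restores a saved model and scores it on the test set:

```python
# Hedged sketch: evaluate every saved checkpoint and report the best one.
# `load_checkpoint` and `evaluate_mean_instance_iou` are hypothetical
# stand-ins, not functions from this repository.
def best_checkpoint(checkpoint_paths, load_checkpoint, evaluate_mean_instance_iou):
    best_path, best_iou = None, float("-inf")
    for path in checkpoint_paths:
        model = load_checkpoint(path)             # restore one of the ~40 saved models
        iou = evaluate_mean_instance_iou(model)   # score it on the test set
        if iou > best_iou:
            best_path, best_iou = path, iou
    return best_path, best_iou
```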

Thanks!

syb7573330 commented 4 years ago
  1. We used the same training & testing code and preprocessed data as in PointNet for a fair comparison. Please check their code and data.

  2. "pc_augment_to_point_num()" is for padding purpose. You are right, when detecting neighbors, duplicated points may be included. But the number of duplicated points should be very small compared to the total number of neighbors, so I think this effect is minor.

  3. You are right. Please see the detailed calculation in part_seg/test.py; a simplified IoU sketch follows this list.
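To make the padding in point 2 concrete, here is a short sketch of what pc_augment_to_point_num() does as I understand it from the PointNet part_seg code; the exact implementation there may differ:

```python
import numpy as np

# Sketch of the padding behavior described in point 2: if a shape has fewer
# than `point_num` points, the cloud is padded by repeating its own points,
# so the padded cloud contains exact duplicates.
def pc_augment_to_point_num(pts, point_num):
    assert pts.shape[0] <= point_num
    padded = np.array(pts)
    while padded.shape[0] < point_num:
        padded = np.concatenate((padded, pts), axis=0)
    return padded[:point_num, :]
```

Because the padded copies are exact duplicates, a k-NN query can return the same physical point more than once, which is the minor effect noted above.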
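Since point 3 defers to part_seg/test.py, here is a simplified sketch of the mean instance IoU convention it refers to, following the usual ShapeNet-part rule that a part absent from both the prediction and the ground truth counts as IoU = 1. The function and argument names here are illustrative, not the repo's API:

```python
import numpy as np

# Hedged sketch of mean instance IoU for part segmentation. The
# authoritative calculation is in part_seg/test.py.
def mean_instance_iou(preds, labels, parts_per_shape):
    """preds / labels: lists of per-point part-label arrays, one per shape.
    parts_per_shape: for each shape, the part ids of its category."""
    shape_ious = []
    for pred, label, parts in zip(preds, labels, parts_per_shape):
        part_ious = []
        for part in parts:  # only the parts of this shape's category
            inter = np.sum((pred == part) & (label == part))
            union = np.sum((pred == part) | (label == part))
            # A part missing from both prediction and ground truth scores 1.
            part_ious.append(1.0 if union == 0 else inter / union)
        shape_ious.append(np.mean(part_ious))  # per-shape (instance) IoU
    return float(np.mean(shape_ious))  # average over shapes, not categories
```

Averaging over shapes directly is what makes this the "instance" mIoU, as opposed to a category-averaged mIoU.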