Closed eriche2016 closed 7 years ago
Thanks @eriche2016 for your questions
I'll leave (1) to Kaichun @daerduoCarey. As to (2), evaluation should be on the original sets of points, which do have variable numbers of points. We can set a MAX_NUM_POINT and use that for evaluation; if a cloud has fewer points than that, we can replicate existing points (which won't affect the max-pooled result) and only look at the predictions for the first N valid points.
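The padding trick above can be sketched as follows. This is a minimal illustration, not the repo's actual evaluation code; `MAX_NUM_POINT` is an assumed cap you would set to at least the size of the largest test cloud:

```python
import numpy as np

MAX_NUM_POINT = 3000  # assumed cap; choose >= the largest cloud in the test set

def pad_by_replication(points, max_num_point=MAX_NUM_POINT):
    """Pad an (N, 3) cloud to (max_num_point, 3) by replicating existing
    points. Duplicated points cannot change a max-pooled global feature,
    so the network's predictions for the first N points are unaffected."""
    n = points.shape[0]
    assert n <= max_num_point
    # Keep the original N points first, then append random repeats of them.
    extra = np.random.choice(n, max_num_point - n, replace=True)
    idx = np.concatenate([np.arange(n), extra])
    return points[idx], n  # return n so predictions can be masked later

# Usage: run the network on the padded cloud, then score only the
# first n predictions against the n ground-truth labels.
```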
@daerduoCarey could you also comment on them?
Hi @eriche2016 ,
For question (1), we assume that we know the object category of the object being segmented, so we only allow the prediction to be one of the semantic part labels that belong to that known class. For example, for a given chair, we only allow the network to predict "chair legs", "chair seats", etc., and prevent it from predicting "table legs", "airplane wings", etc. When calculating IoU, we therefore use cur_gt_label to select the semantic labels that are allowed to be predicted. We mention this point in the paper on page 10, Appendix C, "PointNet Segmentation Network", third paragraph.
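The category-restricted prediction and the per-part IoU can be sketched like this. This is a hedged illustration, not the repo's code: `object2setofoid` and its label ids are hypothetical (the real mapping comes from the dataset's category/part files), and the "absent part counts as IoU 1" convention is assumed to match the repo's evaluation:

```python
import numpy as np

# Hypothetical mapping from object category to its allowed part-label ids.
object2setofoid = {'Chair': [12, 13, 14, 15], 'Table': [47, 48, 49]}

def predict_parts(seg_logits, category):
    """seg_logits: (num_points, total_num_parts) raw scores.
    Restrict the argmax to the part labels of the known category,
    so a chair point can never be labelled 'table leg'."""
    iou_oids = object2setofoid[category]
    # argmax over the allowed columns only, then map back to global ids
    local = np.argmax(seg_logits[:, iou_oids], axis=1)
    return np.array(iou_oids)[local]

def part_iou(pred, gt, iou_oids):
    """Average IoU over the category's parts; a part absent from both
    prediction and ground truth is counted as IoU 1 (assumed convention)."""
    ious = []
    for oid in iou_oids:
        inter = np.sum((pred == oid) & (gt == oid))
        union = np.sum((pred == oid) | (gt == oid))
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))
```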
Hope this answers your question, thank you!
Thank you very much for your quick answer. It is clear to me now.
Hi, excellent work. After reading your evaluation code for part segmentation, I find that (1) the evaluation part (these lines) uses cur_gt_label (which is ground truth) to compute iou_oids. Why don't you use the predicted labels to evaluate the IoU, or simply compute the mask by argmax on seg_pred_res here and compare it to the ground-truth segmentation mask? (2) I also notice that you don't use the
.h5 test files
but instead the folders containing the test point-cloud data for evaluation. From my observation, each test example in the .h5 files has 2048 points, whereas each item in the test folder has a variable number of points. Which one is the correct input for evaluating the model? Can you give some tips?