Pointcept / PointTransformerV2

[NeurIPS'22] An official PyTorch implementation of PTv2.

Testing of the ScanNet dataset #36

Closed JJBUP closed 1 year ago

JJBUP commented 1 year ago

Hello. For the 20-class semantic segmentation task on ScanNetv2, points that do not belong to any of the 20 categories are treated as a background category. During training and validation the labels are available, so we can set ignore_label = -1 to exclude these background points from the loss and from the visualization results. But the test set has no labels, so how should this be handled? Since the output has 20 channels rather than 21, background points at test time will be assigned to one of the 20 classes. Won't this cause an error?

Looking forward to your reply! 🙂

Gofinge commented 1 year ago

Hi, the background label is different from the ignored label. To classify background points, you need to assign them an additional class index. Consequently, the semantic segmentation task becomes 21 classes for both ScanNet train and test.
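A minimal sketch of that idea (not from the repo; `BACKGROUND_INDEX` and the function name are illustrative): remap the ignored label -1 to an explicit 21st class so the segmentation head outputs 21 channels.

```python
import numpy as np

NUM_FOREGROUND_CLASSES = 20   # the 20 ScanNet benchmark classes
BACKGROUND_INDEX = 20         # hypothetical index for the extra background class

def add_background_class(segment: np.ndarray) -> np.ndarray:
    """Remap ignored points (label -1) to an explicit background class."""
    segment = segment.copy()
    segment[segment == -1] = BACKGROUND_INDEX
    return segment

# With this remapping, the segmentation head and criterion would use
# NUM_FOREGROUND_CLASSES + 1 = 21 output channels instead of 20.
```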

JJBUP commented 1 year ago

Thank you for your reply. After ScanNet preprocessing, points outside the 20 classes are labeled -1, and in your latest Pointcept repository num_classes and ignore_label are set to 20 and -1 respectively in the ScanNet configuration file. This made me think that ScanNet is only trained and tested on 20 classes. So, according to your reply, we are actually predicting 21 classes and the model's output has 21 channels, but points with label -1 are excluded from the OA/mIoU/mAcc calculation, right? 😊
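For reference, a minimal sketch of how ignore_label = -1 behaves during training, assuming standard PyTorch cross-entropy semantics (the actual criterion setup in the config may differ):

```python
import torch
import torch.nn as nn

# Points labeled -1 contribute nothing to the loss, so the head keeps 20 channels.
criterion = nn.CrossEntropyLoss(ignore_index=-1)

logits = torch.randn(8, 20)                          # (num_points, num_classes=20)
labels = torch.tensor([3, -1, 7, 0, -1, 19, 5, 2])   # -1 marks out-of-benchmark points

loss = criterion(logits, labels)                     # -1 entries are excluded
```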

Gofinge commented 1 year ago

There is no background label in our framework, and the ScanNet benchmark does not have one either. I was only addressing the case you mentioned, in which you need an additional background classification.

JJBUP commented 1 year ago

I see what you mean, but I still want to confirm: does the model actually predict 21 classes while only computing OA/mIoU/mAcc over the 20 classes? 🤞

Gofinge commented 1 year ago

We only predict 20 classes, as all existing models do. Maybe the following config can solve your problem.
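The config Gofinge refers to is not reproduced in this thread; below is a rough sketch of the relevant fields being discussed, with key names taken from the thread's wording (num_classes, ignore_label / ignore_index), which should be checked against the actual ScanNet config in the repo:

```python
# Sketch of the dataset / criterion settings under discussion (illustrative only).
data = dict(
    num_classes=20,    # only the 20 ScanNet benchmark classes are predicted
    ignore_label=-1,   # out-of-benchmark points are excluded from loss and metrics
)

criteria = [
    dict(type="CrossEntropyLoss", loss_weight=1.0, ignore_index=-1),
]
```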

JJBUP commented 1 year ago

Yes, I have looked at the config, and I understand that when the data has labels we can exclude points outside the 20 categories during training and testing. But on the unlabeled test set, points that are not in the 20 categories will still be predicted as one of the 20 classes. How should I handle this situation? Sorry again for troubling you.

haibo-qiu commented 1 year ago

Hi @JJBUP,

Actually, the evaluation script on the ScanNet server already handles your concern.

As you can see in this line of the official toolkit, only points with valid labels are included in the IoU calculation.
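The linked line itself is not quoted here; below is a minimal sketch of the underlying idea, assuming a NumPy-style evaluation and a hypothetical `VALID_CLASS_IDS` stand-in for the benchmark's valid label IDs. Points whose ground-truth label is not valid are masked out before the confusion matrix is built, so predictions on background points never affect the IoU.

```python
import numpy as np

VALID_CLASS_IDS = np.arange(20)   # hypothetical stand-in for the benchmark's valid IDs

def evaluate_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int = 20) -> np.ndarray:
    """Per-class IoU computed only over points with valid ground-truth labels."""
    mask = np.isin(gt, VALID_CLASS_IDS)          # drop points with invalid / ignored labels
    pred, gt = pred[mask], gt[mask]
    confusion = np.bincount(
        gt * num_classes + pred, minlength=num_classes ** 2
    ).reshape(num_classes, num_classes)
    tp = np.diag(confusion)
    fp = confusion.sum(axis=0) - tp
    fn = confusion.sum(axis=1) - tp
    return tp / np.maximum(tp + fp + fn, 1)      # per-class IoU
```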