Closed hpc100 closed 6 months ago
@hpc100 Did you find a solution for this? I am also currently looking into excluding the ignored class from the loss and metric calculations. I'll give an update when I've looked into it.
I have the following 4 classes in my custom dataset: `['background', 'utility_pipes', 'main_pipe', 'ignore']`
In order to make PointNeXt exclude the ignore class (which has index 3 in my GT labels), I do the following two things:
Modify the `__init__` method of the `ConfusionMatrix` class (`openpoints/utils/metrics.py`). Original:
```python
def __init__(self, num_classes, ignore_index=None):
    self.value = 0
    self.num_classes = num_classes
    self.virtual_num_classes = num_classes + 1 if ignore_index is not None else num_classes
    self.ignore_index = ignore_index
```
Change to:
```python
def __init__(self, num_classes, ignore_index=None):
    self.value = 0
    self.num_classes = num_classes - 1 if ignore_index is not None else num_classes
    self.virtual_num_classes = num_classes
    self.ignore_index = ignore_index
```
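For reference, a minimal sketch (not the openpoints implementation) of the general idea: drop the points whose ground-truth label equals the ignore index before accumulating the confusion matrix, so they count toward neither TP nor union.

```python
import torch

def confusion_matrix(preds, labels, num_classes, ignore_index=None):
    """Toy confusion matrix that skips ignored ground-truth points."""
    if ignore_index is not None:
        keep = labels != ignore_index          # mask out ignored points
        preds, labels = preds[keep], labels[keep]
    # Flatten each (gt, pred) pair into a single bin index, then count.
    idx = labels * num_classes + preds
    return torch.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

labels = torch.tensor([0, 1, 2, 3, 3])         # class 3 is 'ignore'
preds  = torch.tensor([0, 1, 1, 2, 3])
cm = confusion_matrix(preds, labels, num_classes=3, ignore_index=3)
# Only 3 points survive the mask; the two ignored ones contribute nothing.
```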
Add an `ignore_index` attribute to `default.yaml` in the cfg folder of my dataset (e.g. `cfgs/[datasetname]/default.yaml`):
```yaml
criterion_args:
  NAME: CrossEntropy
  label_smoothing: 0.2
  ignore_index: 3  # <---- id of the class that should be ignored
```
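For context, PyTorch's own `cross_entropy` accepts an `ignore_index` argument, which is presumably what a `CrossEntropy` criterion configured this way ends up forwarding. A minimal sketch of its effect:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(5, 4)                # 5 points, 4 classes
labels = torch.tensor([0, 1, 2, 3, 3])    # class 3 = 'ignore'

loss_all    = F.cross_entropy(logits, labels)                  # averages all 5 points
loss_masked = F.cross_entropy(logits, labels, ignore_index=3)  # averages only the 3 kept points
```

With `ignore_index=3`, the masked loss equals the plain cross-entropy computed over just the points whose label is not 3, so the ignored class contributes no gradient.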
Thank you both! I also updated the following code in `metrics.py` to avoid a 100 mIoU on the validation dataset:
```python
def get_mious(tp, union, count):
    # iou_per_cls = (tp + 1e-10) / (union + 1e-10) * 100
    # acc_per_cls = (tp + 1e-10) / (count + 1e-10) * 100
    iou_per_cls = torch.where(union + tp == 0, torch.tensor([0.0], device=tp.device), (tp + 1e-10) / (union + 1e-10)) * 100
    acc_per_cls = torch.where(count + tp == 0, torch.tensor([0.0], device=tp.device), (tp + 1e-10) / (count + 1e-10)) * 100
    over_all_acc = tp.sum() / count.sum() * 100
    miou = torch.mean(iou_per_cls)
    macc = torch.mean(acc_per_cls)  # class accuracy
    return miou.item(), macc.item(), over_all_acc.item(), iou_per_cls.cpu().numpy(), acc_per_cls.cpu().numpy()
```
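A tiny arithmetic demo of why the commented-out version reports 100 for a class that never occurs: when `tp == union == 0`, the two epsilons cancel to exactly 1, so the class is scored as a perfect match rather than an absent one.

```python
eps = 1e-10
tp, union = 0.0, 0.0  # class absent from both GT and predictions

iou_old = (tp + eps) / (union + eps) * 100                        # spurious 100.0
iou_new = 0.0 if union + tp == 0 else (tp + eps) / (union + eps) * 100  # 0.0

print(iou_old, iou_new)  # 100.0 0.0
```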
For some datasets, we have to use the `ignore_index` parameter in `ConfusionMatrix` and in the loss:

```python
cm = ConfusionMatrix(num_classes=cfg.num_classes, ignore_index=cfg.ignore_index)
```

in `train_one_epoch` (line 336 of `main.py`) and again in `validate` (line 391 of `main.py`). It seems that in TRAIN mode the IoU for the ignore label is 0 while in VAL mode it is 100, introducing a gap between TRAIN mIoU and VAL mIoU.
Example for S3DIS (if we introduce `ignore_index = 0`), we get this IoU:
Example for Semantic3D (in this dataset `ignore_index` is set to 0):

```
VAL mode : [100.  0.  0.  0.  8.41  0.  0.  0.  0.]  // mIoU : 12.05
```
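One possible workaround (an assumption on my part, not openpoints code) is to mask the ignored class out of the mean entirely, so that neither an artificial 0 nor an artificial 100 can skew the mIoU. With the Semantic3D numbers above:

```python
import torch

# Per-class IoUs from the VAL run; index 0 is the ignored class.
iou_per_cls = torch.tensor([100.0, 0.0, 0.0, 0.0, 8.41, 0.0, 0.0, 0.0, 0.0])

valid = torch.ones_like(iou_per_cls, dtype=torch.bool)
valid[0] = False                      # drop the ignored class from the average

miou_naive  = iou_per_cls.mean()      # (100 + 8.41) / 9 ~= 12.05, inflated
miou_masked = iou_per_cls[valid].mean()  # 8.41 / 8 ~= 1.05, ignore class excluded
```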
Any ideas? Thanks!