Closed kosmastsk closed 3 years ago
Sorry for the late reply.
The mapping of classes from SemanticKITTI for learning is handled in https://github.com/PRBonn/lidar-bonnetal/blob/master/train/tasks/semantic/config/labels/semantic-kitti.yaml. There you can modify the `learning_map` to join classes.
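For illustration, a modified `learning_map` that joins the vehicle-like classes could look roughly like this. This is only an excerpt, not the full file; the raw IDs on the left follow the SemanticKITTI label definitions, while the train IDs on the right are made up for this example:

```yaml
# Illustrative excerpt of a modified learning_map -- raw label id -> train id.
# The train ids (right-hand side) are hypothetical choices for this example.
learning_map:
  0: 0    # "unlabeled"     -> ignored
  10: 1   # "car"           -> joined class "vehicle"
  11: 1   # "bicycle"       -> joined class "vehicle"
  13: 1   # "bus"           -> joined class "vehicle"
  15: 1   # "motorcycle"    -> joined class "vehicle"
  18: 1   # "truck"         -> joined class "vehicle"
  20: 1   # "other-vehicle" -> joined class "vehicle"
  30: 2   # "person"        -> kept as its own class
```

Remember to keep `learning_map_inv` and the class counts consistent with whatever mapping you choose.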
If you join classes from a pre-trained model trained with all classes, the simplest way is to sum the post-softmax confidences of the individual classes, since the overall confidences are normalized. Note, however, that this implicitly assumes the classifier performs equally well (or poorly) on all classes. That is usually not true, so weighting the confidences before summing would make sense to account for this.
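As a minimal sketch of the summing idea (plain Python, no framework; the `groups` and `weights` names are hypothetical, not part of lidar-bonnetal):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of per-class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def join_confidences(probs, groups, weights=None):
    """Merge per-class softmax probabilities into joined classes.

    probs   -- softmax output of the pre-trained model (one float per class)
    groups  -- {joined_class_name: [original class indices]} (hypothetical)
    weights -- optional per-original-class reliability weights; the result
               is re-normalized so the joined confidences still sum to 1
    """
    if weights is None:
        weights = [1.0] * len(probs)
    joined = {name: sum(weights[i] * probs[i] for i in idxs)
              for name, idxs in groups.items()}
    total = sum(joined.values())
    return {name: v / total for name, v in joined.items()}

# Example: 4 original classes merged into 2 joined classes.
probs = softmax([2.0, 1.0, 0.5, 0.1])
groups = {"vehicle": [0, 1], "other": [2, 3]}
merged = join_confidences(probs, groups)
```

With uniform weights the re-normalization is a no-op; it only matters once you down-weight classes the network classifies less reliably.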
If you do not want to train from scratch with the joined classes (i.e., you want to reuse a pre-trained model with all classes), the simplest way is to replace the classification head so the output matches the joined classes, and then finetune with the new head on the reduced class set, accounting for the joined classes in the loss.
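The "account for joining of classes in the loss" part boils down to remapping the raw ground-truth labels to the joined train IDs before computing the loss. A minimal sketch, assuming a `LEARNING_MAP` like the one in the modified `semantic-kitti.yaml` (the names and IDs here are hypothetical):

```python
# Hypothetical joined mapping: raw SemanticKITTI label id -> joined train id.
# 0 acts as the ignore id for unlabeled points.
LEARNING_MAP = {0: 0, 10: 1, 11: 1, 13: 1, 15: 1, 18: 1, 20: 1, 30: 2}

def remap_labels(raw_labels, learning_map, ignore_id=0):
    """Map raw per-point label ids to the joined train ids, so the loss
    is computed against the reduced class set. Ids not covered by the
    mapping fall back to the ignore id."""
    return [learning_map.get(label, ignore_id) for label in raw_labels]

# car, bicycle, person, and an unmapped id -> vehicle, vehicle, person, ignore
raw = [10, 11, 30, 99]
train_labels = remap_labels(raw, LEARNING_MAP)
```

In lidar-bonnetal the mapping itself is driven by the YAML config; this snippet only shows the shape of the transformation the loss sees after the head is swapped to the reduced number of output classes.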
Hope that helps.
@jbehley thank you for your answer, this solved my question. I'll keep this issue updated once I apply what you suggested for training a model with fewer classes. I'm closing this issue for now.
What is the proper way to reduce the classes used by the network during training and inference? Specifically, I need one class that contains several others (e.g., a class 'vehicle' containing all objects from the 'car', 'bicycle', etc. classes).
How will this change affect inference if the training was done with all classes?