Closed: martinrebane closed this issue 4 years ago
Hello,
Sorry for the delayed response; I've been involved with some other projects recently. I believe your understanding is correct. However, this doesn't actually hurt either training or testing: during training we work on voxelized grids (where voxels with at least two conflicting labels are ignored), and during testing we use the original per-point labels. Hope that helps.
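As a rough illustration (a simplified sketch of the behavior, not the actual torchsparse implementation), the voxel-label assignment works roughly like this:

```python
def voxelize_labels(voxel_ids, point_labels, ignore_label=255):
    """Assign one label per voxel; a voxel whose points carry conflicting
    labels gets ignore_label and is skipped by the training loss."""
    voxel_labels = {}
    for vid, lab in zip(voxel_ids, point_labels):
        if vid not in voxel_labels:
            voxel_labels[vid] = lab
        elif voxel_labels[vid] != lab:
            voxel_labels[vid] = ignore_label  # conflicting labels inside one voxel
    return voxel_labels

# Points 0 and 1 share voxel 0 but disagree (4 vs 7), so voxel 0 is ignored;
# voxel 1 keeps its single consistent label.
print(voxelize_labels([0, 0, 1], [4, 7, 4]))  # {0: 255, 1: 4}
```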
Thanks, Haotian
Thanks! I appreciate your answer! :+1:
Hi!
Thanks for your work! SPVNAS is indeed a great tool!
I was playing around with evaluate.py to test different metrics and wondered why there is always a ~100-200 point difference between the ground truth taken from the original tensor, all_labels = feed_dict['targets_mapped'] (your code in evaluate.py), and the labels reconstructed from feed_dict['targets'] using feed_dict['inverse_map'].
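The comparison looks roughly like this (a sketch rather than the exact snippet; targets_quick and diff are my own names, batching is omitted, and the raw values live in the SparseTensors' .F fields):

```python
# Ground truth for the original, non-voxelized points (what evaluate.py evaluates against)
targets_mapped = feed_dict['targets_mapped'].F.long()

# Reconstruct per-point labels by scattering the voxel labels back through the inverse map
inverse_map = feed_dict['inverse_map'].F.long()
targets_quick = feed_dict['targets'].F.long()[inverse_map]

# Mask of points where the two versions agree
diff = targets_mapped == targets_quick
```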
I would understand if the difference were due to ignore_class, but these look like just random classes: print("Diff", targets_mapped[~diff], targets_quick[~diff]) outputs pairs of seemingly unrelated class labels.
I cannot figure out where this difference comes from. Every point cloud has around 100-200 points where the original and reconstructed ground truth differ. There is filtered_labels[counts > 1] = ignore_label in torchsparse/utils/helpers.py, but that seems to handle duplicates by replacing them with 255, whereas I observe random classes. Am I missing something, or how can this difference be explained?
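For what it's worth, here is a toy illustration of my understanding of the lossy round trip (my own sketch, not code from the repo; the assumption that a voxel simply keeps the label of one of its points is mine):

```python
import numpy as np

# Toy example: five points, the first three of which fall into the same voxel.
point_labels = np.array([4, 4, 7, 2, 9])   # original per-point ground truth
inverse_map  = np.array([0, 0, 0, 1, 2])   # voxel index of every point

# One label per voxel; here the voxel keeps the label of its first point
# (an assumption for illustration -- with conflict filtering it would be 255 instead).
voxel_labels = np.array([4, 2, 9])

reconstructed = voxel_labels[inverse_map]  # scatter voxel labels back to the points
diff = point_labels == reconstructed
print("Diff", point_labels[~diff], reconstructed[~diff])
# Diff [7] [4] -> the minority point in the shared voxel ends up with the voxel's label
```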