QingyongHu closed this issue 5 years ago
The results are evaluated on the test set.
What exactly do you want? The numbers used to generate the plot, or the ability to run such an evaluation yourself?
Yes, I would like to have an evaluation like this and compare our approach to these baselines. However, I can currently only evaluate my method on the validation set, and I don't have the results of all these baselines.
Many thanks!
Sorry for the delay. We had to discuss some options.
Since it makes no sense to provide the test-set numbers for the plot, and we will not run the distance-based evaluation on the CodaLab servers, the only option we can offer is to use `evaluate_semantics_by_distance.py` with your predictions for the validation set together with the predictions of SqueezeSeg, SqueezeSegV2, and our extended (and best-performing) DarkNet53. You can also compare against our extension using the kNN post-processing [1].
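For reference, the core idea behind a distance-based evaluation is to bin points by their range from the sensor and compute the IoU within each bin. The sketch below is illustrative only, not the repository script; the function name, bin edges, and array layout are assumptions:

```python
import numpy as np

def iou_by_distance(points, pred, gt, num_classes, bins):
    """Mean IoU over classes, computed separately per distance bin.

    points: (N, 3) xyz coordinates of the scan points (sensor at origin)
    pred, gt: (N,) integer class labels (prediction and ground truth)
    bins: list of (lo, hi) range intervals in metres (illustrative choice)
    """
    dist = np.linalg.norm(points, axis=1)  # Euclidean range per point
    results = []
    for lo, hi in bins:
        mask = (dist >= lo) & (dist < hi)
        p, g = pred[mask], gt[mask]
        ious = []
        for c in range(num_classes):
            inter = np.sum((p == c) & (g == c))
            union = np.sum((p == c) | (g == c))
            if union > 0:  # skip classes absent from this bin
                ious.append(inter / union)
        results.append(np.mean(ious) if ious else float("nan"))
    return results
```

Plotting these per-bin values against the bin centers yields a curve like the one in the paper's figure.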
You can find the validation-set predictions of the different approaches in PRBonn/lidar-bonnetal.
Hope that helps.
[1] A. Milioto et al., "RangeNet++: Fast and Accurate LiDAR Semantic Segmentation," IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2019.
Thanks for your reply. I will try to do that.
Let me know when you need further information.
I'm closing the issue.
Thanks for your excellent work. I am writing to ask whether Figure 4 in the paper is based on the results of the validation set or the test set. Could you also share the distance-based IoU results of all the baselines?
Many thanks!