Closed by amiltonwong 5 years ago
Hi @amiltonwong,
All results are evaluated on the whole sequence. The difference between the multi-scan and single-scan experiments is the number of scans used to produce each prediction: one vs. many.
Please have a look at the evaluate_semantics.py script; the only difference is the configuration file (config/semantic-kitti.yaml vs. config/semantic-kitti-all.yaml).
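Conceptually, the single-scan configuration collapses each moving class onto its non-moving counterpart before scoring, while the multi-scan configuration keeps them separate. A minimal sketch of that collapsing step (the label IDs and mapping here are illustrative assumptions; the real mapping lives in the configuration files):

```python
import numpy as np

# Hypothetical label IDs for illustration only; the actual IDs are
# defined in config/semantic-kitti.yaml.
CAR, MOVING_CAR = 10, 252
PERSON, MOVING_PERSON = 30, 254

# Single-scan evaluation maps each moving class onto its static
# counterpart, so only the aggregated classes are scored.
SINGLE_SCAN_MAP = {MOVING_CAR: CAR, MOVING_PERSON: PERSON}

def aggregate_moving(labels: np.ndarray) -> np.ndarray:
    """Collapse moving classes into their non-moving counterparts."""
    out = labels.copy()
    for moving_id, static_id in SINGLE_SCAN_MAP.items():
        out[out == moving_id] = static_id
    return out

labels = np.array([CAR, MOVING_CAR, PERSON, MOVING_PERSON])
print(aggregate_moving(labels))  # [10 10 30 30]
```

With the multi-scan configuration no such remapping happens, so moving and non-moving instances of a class count as distinct classes in the IoU.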
To make it clear:
a. Single scan: you take only scan 245 and predict the labels of scan 245 from that information alone. We evaluate on 19 classes, where moving and non-moving classes are aggregated.
b. Multi scan: you take information from scans 245, 244, 243, 242, and 241 and predict labels for scan 245. (You can also take more scans; we don't care, since we only look at the results for scan 245.) We evaluate on all classes and distinguish between moving and non-moving.
In both cases, you have to provide labels for each scan.
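To feed several past scans into a model for the current scan, the past point clouds first have to be brought into the current scan's coordinate frame using the dataset poses. A minimal sketch of that accumulation (the function names and the "pose maps scan-local to world coordinates" convention are my assumptions, not part of any dataset API):

```python
import numpy as np

def to_homogeneous(points: np.ndarray) -> np.ndarray:
    """Append a homogeneous coordinate of 1 to each (x, y, z) point."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

def accumulate_scans(scans, poses, target_idx):
    """Merge several scans into the coordinate frame of scans[target_idx].

    scans: list of (N_i, 3) point arrays in their own local frames.
    poses: list of 4x4 matrices mapping each scan's local frame to world.
    """
    T_target_inv = np.linalg.inv(poses[target_idx])
    merged = []
    for pts, T in zip(scans, poses):
        world = to_homogeneous(pts) @ T.T        # local -> world
        local = world @ T_target_inv.T           # world -> target frame
        merged.append(local[:, :3])
    return np.vstack(merged)

# Toy example: scan 0 is shifted 1 m along x relative to scan 1.
pose0 = np.eye(4); pose0[0, 3] = 1.0
pose1 = np.eye(4)
scans = [np.zeros((1, 3)), np.zeros((1, 3))]
print(accumulate_scans(scans, [pose0, pose1], target_idx=1))
# [[1. 0. 0.]
#  [0. 0. 0.]]
```

Note that the labels you submit are still per scan; the accumulation only affects what the model sees as input.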
Hope that clarifies your question.
This seems to be answered. If you still have concerns, then simply reopen the issue.
Hi, thanks for your code.
I wonder how to use the multi-scan method: how do you train with the previous n scans, and how do you evaluate the multi-scan method?
Hi, all,
According to Section 4 (Experiments) of the paper "SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences", could you clarify "single scan result" and "multiple scan result" a bit further? For the "single scan result": e.g. seq 21 contains 2,721 bin files (i.e., 2,721 scans). Does "single scan result" mean that the prediction for each of the 2,721 scans is evaluated separately, and the results are then averaged over all 2,721 scans to get the IoU for seq 21? For the "multiple scan result": Section 4.2 mentions that the first 5 consecutive scans are combined as a sequential input. Is the result in Table 4 evaluated only for such a sequential input (only the first 5 consecutive scans combined)?
THX!