SSLAD2021 / AutoScenes-eval


To the Committee: here are some issues about the AutoScenes dataset #1

terrencewayne opened 2 years ago

terrencewayne commented 2 years ago

Hi all, we have gone through the AutoScenes dataset and encountered the problems below.

Q1: Unified Model

Is it permitted to use separate models for object detection and road segmentation?

Q2: Inaccurate Box Annotations

We found some inaccurate 3D bounding boxes when projecting them onto images, for example in scene-plus-0751. Generally, inaccurate extrinsics or asynchronous sensor timestamps cause this problem. Is it negligible for training? [image]

There are also some strangely rotated boxes, for example in scene-plus-0734. [image]
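For reference, this is roughly how we project the boxes to spot the misalignment. A minimal sketch assuming the standard nuScenes schema and devkit; the dataroot, version string, and CAM_FRONT channel name are placeholders of ours, not confirmed AutoScenes values:

```python
import matplotlib.pyplot as plt
from nuscenes.nuscenes import NuScenes

# Placeholder dataroot / version; adjust to the AutoScenes release layout.
nusc = NuScenes(version='v1.0-trainval', dataroot='/data/autoscenes', verbose=False)

scene = next(s for s in nusc.scene if s['name'] == 'scene-plus-0751')
sample = nusc.get('sample', scene['first_sample_token'])
cam_token = sample['data']['CAM_FRONT']  # assumed channel name

# get_sample_data returns the boxes already transformed into the camera frame.
data_path, boxes, camera_intrinsic = nusc.get_sample_data(cam_token)

ax = plt.gca()
ax.imshow(plt.imread(data_path))
for box in boxes:
    # Boxes that land far from the object outline suggest extrinsic or
    # timestamp problems rather than bad 3D geometry.
    box.render(ax, view=camera_intrinsic, normalize=True)
plt.show()
```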

Q3: Invalid Segmentation Polygons

Following the instructions, we used the nuScenes devkit to extract the BEV segmentation masks. However, only 10 of the 45 trainval scenes (scene-plus-0670, scene-plus-0676, scene-plus-0696, scene-plus-0705, scene-plus-0729, scene-plus-0742, scene-plus-0751, scene-plus-0826, scene-plus-0844, scene-plus-0874) yield a correct BEV mask, as pasted below. For the remaining trainval scenes we extract blank masks for all segmentation classes. [image]
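For completeness, a minimal sketch of the blank-mask check we ran, assuming the devkit's map expansion API applies to AutoScenes as the instructions suggest; the dataroot, LIDAR_TOP channel name, patch size, and layer list are our own choices:

```python
from nuscenes.nuscenes import NuScenes
from nuscenes.map_expansion.map_api import NuScenesMap

# Placeholder dataroot / version; adjust to the AutoScenes release layout.
nusc = NuScenes(version='v1.0-trainval', dataroot='/data/autoscenes', verbose=False)

for scene in nusc.scene:
    log = nusc.get('log', scene['log_token'])
    nusc_map = NuScenesMap(dataroot='/data/autoscenes', map_name=log['location'])

    # Centre a 100 m x 100 m axis-aligned patch on the scene's first ego pose.
    sample = nusc.get('sample', scene['first_sample_token'])
    sd = nusc.get('sample_data', sample['data']['LIDAR_TOP'])  # assumed channel name
    ego = nusc.get('ego_pose', sd['ego_pose_token'])
    patch_box = (ego['translation'][0], ego['translation'][1], 100.0, 100.0)

    layers = ['drivable_area', 'ped_crossing']
    masks = nusc_map.get_map_mask(patch_box, 0.0, layers, canvas_size=(200, 200))

    # Scenes whose masks are all-zero for every layer are the broken ones.
    blank = all(m.sum() == 0 for m in masks)
    print(scene['name'], 'BLANK' if blank else 'ok')
```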

Q4: Missing Ped-Crossing

It seems that no "ped-crossing" is annotated in the trainval scenes, which conflicts with what the challenge claims.

Q5: Unknown Keywords

We have not figured out the meaning of "relative translation" and "relative rotation" in ego_pose.json, or of "relative_velocity" in sample_annotation.json.

SSLAD2021 commented 2 years ago

Q1: No. As this track is called multi-task learning, the object detection and segmentation results should come from a single model.

Q2: There are lidar overlaps at the beginning and end of each lidar frame, which may confuse the annotations. You can identify these overlaps and remove the bad ones.

Q3: Yes, you are right. After careful examination we found that log tokens collide and multiple scenes point to the same map. We have fixed this and updated the annotation download link at sslad2022.

Q4: Only the test dataset contains 'ped-crossing' labels. We have decided to remove this class from both the validation and test set evaluations. You can find the changes in the new result file demo.

Q5: The AutoScenes dataset has no 'multi-sweep' data, so velocity is hard to learn from the training data; that is why we removed mAVE from the final evaluation metrics. You can simply ignore these fields.

terrencewayne commented 2 years ago

Thanks a lot, we have just tried the new annotations and the BEV masks look correct.

terrencewayne commented 2 years ago

In terms of the lidar overlaps, how should we identify the bad annotations? Also, just to confirm: it seems that "traffic_cone" has annotations but is not involved in evaluation.

nmzfrank commented 2 years ago

For the overlapping lidar annotations, you can remove the 'bad' one according to its yaw angle, which is always parallel to the ego car. 'traffic_cone' is not included; we want to keep the object detection setting close to the ONCE dataset.
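For illustration, a rough sketch of that filtering rule. The distance and angle tolerances, and pairing near-coincident boxes by centre distance, are assumptions of this sketch rather than part of the dataset spec:

```python
import numpy as np
from pyquaternion import Quaternion

def yaw_from_quat(q):
    """Yaw (rotation about z) of a quaternion given as [w, x, y, z]."""
    return Quaternion(q).yaw_pitch_roll[0]

def is_parallel(yaw_a, yaw_b, tol=np.deg2rad(5.0)):
    """True if two yaws are parallel up to direction (equal mod pi)."""
    d = abs(yaw_a - yaw_b) % np.pi
    return min(d, np.pi - d) < tol

def drop_overlap_duplicates(annotations, ego_yaw, center_tol=0.5):
    """Among pairs of near-coincident boxes, drop the one whose yaw is
    parallel to the ego heading (the 'bad' overlap duplicate)."""
    keep = list(annotations)
    for a in annotations:
        for b in annotations:
            if a is b:
                continue
            dist = np.linalg.norm(np.array(a['translation'][:2]) -
                                  np.array(b['translation'][:2]))
            if dist < center_tol \
                    and is_parallel(yaw_from_quat(a['rotation']), ego_yaw) \
                    and not is_parallel(yaw_from_quat(b['rotation']), ego_yaw):
                if a in keep:
                    keep.remove(a)
    return keep
```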

terrencewayne commented 2 years ago

Hello, we have encountered some other annotation problems. Please have a look.

Q6: Opposite Orientation

We found that the orientation of some cars in the opposite lane is weird. For instance, scene 0722, as pasted below, has cars from both the ego lane and the opposite lane sharing the same heading. However, cars in the opposite lane should generally have the opposite orientation. This causes a large AOE in evaluation. [image]
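To illustrate why this blows up the metric, a small numeric check (our own sketch; a nuScenes-style orientation error wraps the yaw difference into [0, pi]):

```python
import numpy as np

def orientation_error(yaw_pred, yaw_gt, period=2 * np.pi):
    """Smallest absolute yaw difference, wrapped into [0, period / 2]."""
    d = abs(yaw_pred - yaw_gt) % period
    return min(d, period - d)

# A correctly detected car annotated facing the wrong way along its lane
# contributes the worst-case error of pi (~3.14 rad).
print(orientation_error(0.0, np.pi))  # 3.1416...
print(orientation_error(0.1, 0.0))    # 0.1
```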

Q7: Missing Object Annotations

We found that some objects are missing from the annotations, for instance in scene 0751, as pasted below. The missing annotations cause many false positives in evaluation. Would you please refine the annotations with human verification? [image]

SSLAD2021 commented 2 years ago

Thanks for your reply. Currently we have no plan to update the annotations, which may take some time. Considering the schedule, it is not wise to change the annotations halfway through the challenge. We will update the dataset for the next challenge, not this year.

terrencewayne commented 2 years ago


We agree that refining the annotations is time-consuming. We wonder whether the above problems also appear in the test set. If so, it would be helpful to know which scenes have opposite-lane cars annotated with the same orientation as the ego lane. Is it possible to provide this scene list when the test data is released?

nmzfrank commented 2 years ago

We cannot guarantee that all annotations in the test set are correct. Annotations with correct and wrong orientations may appear in the same lane, which makes it difficult to tell which ones are opposite. We will consider checking the annotations in the test set again and updating the ground truth on the eval server rather than providing a scene list. Nevertheless, as long as the evaluation code and ground truth are consistent across all submissions, this should not be a big problem.