gaowenjie-star opened this issue 6 months ago
The labeled data for this is in ground_truth/dynamic/dynamic_ground_truth.txt.
You can read more about the labeling procedure and how the labels from the drone footage can be used e.g. to auto-label point clouds from either the lidar or stereo camera in this other issue: https://github.com/mikkelkh/FieldSAFE/issues/3
Thank you for your reply. My research is in the field of object detection, and I don't know much about 3D point cloud data. As far as I know, vatic is suited to producing datasets for image detection tasks, so I wanted to first ask whether there is any annotated data for object detection.
Not directly. We only have labels from drone footage, which is a top-down view of the obstacles from far above. It's possible to reproject these labels onto stereo images, but with considerable positional errors. If you are not directly interested in the multi-sensor setup and localization data, I think you should look for another dataset. There are lots of publicly available object detection datasets: https://paperswithcode.com/datasets?task=object-detection
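To illustrate the reprojection mentioned above, here is a minimal sketch of projecting a ground-plane obstacle position (as obtained from top-down drone labels) into a camera image using a standard pinhole model. The intrinsics `K` and the world-to-camera pose `R`, `t` below are placeholder values, not the dataset's actual calibration, which would have to be read from the recorded sensor data; the positional errors mentioned above come from inaccuracies in exactly these quantities.

```python
import numpy as np

# Placeholder calibration -- the real values must come from the
# dataset's camera calibration and localization data.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])   # camera intrinsics (assumed)
R = np.eye(3)                            # world -> camera rotation (assumed)
t = np.array([0.0, 0.0, 5.0])            # world -> camera translation (assumed)

def reproject(point_world):
    """Project a 3D world point into pixel coordinates (pinhole model)."""
    p_cam = R @ np.asarray(point_world, dtype=float) + t
    if p_cam[2] <= 0:
        return None                      # point is behind the camera
    uv = K @ (p_cam / p_cam[2])          # perspective division, then intrinsics
    return uv[:2]

# An obstacle at the world origin, 5 m in front of this assumed camera,
# lands at the principal point:
print(reproject([0.0, 0.0, 0.0]))        # -> [320. 240.]
```

Any error in `R`, `t`, or the obstacle's labeled ground position propagates directly into the pixel coordinates, which is why labels reprojected from a distant top-down view carry considerable positional error.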
Thanks for your great work. The paper mentions that moving obstacles were manually annotated in each frame of one of the videos using the vatic video annotation tool. Where is this labeled data?