mikkelkh / FieldSAFE

Agricultural Dataset with Static and Moving Obstacles

Labelling tool #3

Open niallomahony93 opened 4 years ago

niallomahony93 commented 4 years ago

Thanks for your great work in creating this dataset and for posting details on how you created it. I am looking to create a similar dataset for my own purposes. Which tool did you use to label your dataset? I have been looking at tools like the Semantic Segmentation Editor and semantic-kitti, which can load many point clouds at once, but was unsure how best to handle dynamic obstacles.

mikkelkh commented 4 years ago

We didn't actually label the point clouds themselves. Instead, we labeled a drone-recorded orthophoto of the field and used knowledge of the exact position and orientation of the tractor (and sensors) to project all the pixel-wise labels from the orthophoto into e.g. the point clouds. The procedure is described in section 4 on page 8 of our paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5713196/pdf/sensors-17-02579.pdf.
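The projection chain described above (lidar point -> vehicle -> world -> orthophoto pixel) could be sketched roughly as follows. All names, the 4x4 transform convention, and the flat top-down pixel lookup are illustrative assumptions, not the authors' actual code:

```python
import numpy as np

def lidar_point_to_label(p_lidar, T_vehicle_lidar, T_world_vehicle,
                         ortho_origin, ortho_res, label_img):
    """Project a 3D lidar point into the labeled orthophoto and return its class label.

    p_lidar         : (3,) point in the lidar frame
    T_vehicle_lidar : 4x4 extrinsic calibration (lidar -> vehicle)
    T_world_vehicle : 4x4 vehicle pose from the navigation system (vehicle -> world)
    ortho_origin    : (2,) world x/y of the orthophoto's pixel (0, 0)
    ortho_res       : meters per pixel of the orthophoto
    label_img       : 2D array of per-pixel class ids
    """
    p_h = np.append(p_lidar, 1.0)                      # homogeneous coordinates
    p_world = T_world_vehicle @ T_vehicle_lidar @ p_h  # chain of rigid transforms
    # Drop height: the orthophoto is a top-down map, so only world x/y matter.
    col = int((p_world[0] - ortho_origin[0]) / ortho_res)
    row = int((p_world[1] - ortho_origin[1]) / ortho_res)
    if 0 <= row < label_img.shape[0] and 0 <= col < label_img.shape[1]:
        return label_img[row, col]
    return -1  # point falls outside the mapped area
```

Running this over every point in a scan would yield a fully labeled point cloud, at the cost of inheriting any calibration or pose error in the transform chain.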

For the dynamic obstacles, we had a drone hovering above the field during the recording. And by annotating the position of all obstacles in the drone video, we used the same principle of projecting labels into sensor coordinate frames to annotate e.g. the point clouds.
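For the moving obstacles, the same chain runs in the opposite direction: an obstacle position annotated in the (warped) drone view lives in the world/orthophoto frame and must be brought into the sensor frame at the matching timestamp. A minimal sketch, with illustrative names and assuming the obstacle sits on the ground plane (z = 0):

```python
import numpy as np

def obstacle_world_to_lidar(p_world_xy, T_world_vehicle, T_vehicle_lidar):
    """Transform a moving obstacle's annotated world position (from the drone
    video, already aligned with the orthophoto) into the lidar frame, so that
    nearby lidar points can be labeled as that obstacle.

    p_world_xy      : (x, y) obstacle position in the world frame
    T_world_vehicle : 4x4 vehicle pose at the matching timestamp
    T_vehicle_lidar : 4x4 extrinsic calibration (lidar -> vehicle)
    """
    p_world = np.array([p_world_xy[0], p_world_xy[1], 0.0, 1.0])
    T_world_lidar = T_world_vehicle @ T_vehicle_lidar
    # Invert the chain to go world -> lidar.
    p_lidar = np.linalg.inv(T_world_lidar) @ p_world
    return p_lidar[:3]
```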

For labeling the orthophoto and drone videos pixelwise, we just used a standard photo editor (I think it was GIMP for Ubuntu).

niallomahony93 commented 4 years ago

Thanks for your reply. Using an approach like that makes sense for speeding up the process. Does the drone have to have tracking ability, or is its position inferred from the markers?

mikkelkh commented 4 years ago

We inferred the position from the markers in each image of the video and warped each image so that it was aligned with the static orthophoto. We did use the drone's (DJI Phantom 4) ability to hover steadily above the field, ensuring that all markers were visible throughout the entire video. But aside from that, we didn't use any tracking ability of the drone.
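The thread doesn't say which method was used for the per-frame warping, but with four or more marker correspondences a standard choice is to estimate a homography between each drone frame and the orthophoto (e.g. via the direct linear transform). A minimal sketch, with illustrative names:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography mapping src_pts -> dst_pts with the DLT.

    src_pts, dst_pts : (N, 2) arrays of at least 4 correspondences, e.g.
    marker pixels in a drone frame and their positions in the orthophoto.
    """
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last right-singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Map a single (x, y) point through homography H."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

In practice a library routine such as OpenCV's `findHomography` (with RANSAC to reject bad marker detections) would do the same job more robustly.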

One remark regarding the above procedure for labeling both static and moving obstacles for all sensors: it can be quite error-prone, as there's an entire chain of error sources involved. This includes all transformations (sensor -> vehicle -> global coordinates -> orthophoto) and possible synchronization errors (between images/point clouds and the navigation system, as well as between the navigation system and the hovering drone). So even though it's a great way of obtaining a lot of labeled data, you should also expect some (small) misalignment between the actual obstacles and the pixel-wise/point-wise labels.
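One common way to reduce the synchronization part of that error chain (not necessarily what was done here) is to interpolate the navigation poses at each sensor timestamp instead of taking the nearest fix. A simple 2D sketch, with illustrative names:

```python
import numpy as np

def interpolate_pose(nav_times, nav_xy, nav_yaw, t_sensor):
    """Linearly interpolate the vehicle's 2D pose at a sensor timestamp.

    nav_times : sorted 1D array of navigation-fix timestamps
    nav_xy    : (N, 2) array of vehicle positions
    nav_yaw   : (N,) array of vehicle headings in radians
    t_sensor  : timestamp of the lidar scan or camera frame
    """
    i = np.searchsorted(nav_times, t_sensor)
    i = int(np.clip(i, 1, len(nav_times) - 1))   # clamp to a valid bracket
    t0, t1 = nav_times[i - 1], nav_times[i]
    a = (t_sensor - t0) / (t1 - t0)
    xy = (1 - a) * nav_xy[i - 1] + a * nav_xy[i]
    # Interpolate yaw through the shortest angular difference.
    dyaw = np.angle(np.exp(1j * (nav_yaw[i] - nav_yaw[i - 1])))
    yaw = nav_yaw[i - 1] + a * dyaw
    return xy, yaw
```

This only helps with timing, of course; calibration and georeferencing errors in the transform chain remain.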

jmount1992 commented 3 years ago

@mikkelkh do you have pre-existing software that takes in the current position of the tractor, the labeled orthophoto and outputs labeled point clouds and forward-facing images?

mikkelkh commented 3 years ago

Unfortunately not. The software I used for auto-labeling lidar point clouds was proprietary, so I cannot share it. Also, you will only be able to auto-label forward-facing images from the stereo camera, as you need 3D point clouds from the camera to position each pixel in the orthophoto. Pixels in the image that don't have a stereo match therefore won't get an auto-label. You can read about my experiences with the auto-labeling approach in my thesis, pages 255-269: https://pure.au.dk/portal/files/137246562/kragh_mf_thesis.pdf