Objective: Segmentation of the Camera Image
Segmentation is a useful tool for perceiving the environment and extracting a very rich representation of it. Combined with Depth Perception, it can yield a very accurate snapshot of the track.
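As a sketch of how the two modalities could be combined, the snippet below back-projects all pixels of one segmentation class into 3D camera coordinates using a pinhole model. The intrinsics (`FX`, `FY`, `CX`, `CY`) and the class IDs are placeholder assumptions, not values from our actual camera.

```python
import numpy as np

# Hypothetical camera intrinsics (focal lengths and principal point, in pixels).
FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0

def class_point_cloud(seg_mask, depth, class_id):
    """Back-project all pixels of one segmentation class into 3D camera coordinates.

    seg_mask : (H, W) integer class labels
    depth    : (H, W) metric depth in meters
    class_id : label to extract
    returns  : (N, 3) array of XYZ points
    """
    v, u = np.nonzero(seg_mask == class_id)   # pixel coordinates of that class
    z = depth[v, u]
    x = (u - CX) * z / FX                     # pinhole back-projection
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=1)

# Toy example: a 4x4 image where class 1 covers the top-left quadrant,
# with everything at a constant depth of 2 m.
seg = np.zeros((4, 4), dtype=np.int32)
seg[:2, :2] = 1
depth = np.full((4, 4), 2.0)
pts = class_point_cloud(seg, depth, class_id=1)
print(pts.shape)                              # (4, 3)
```

On the real system the mask would come from the network and the depth map from the stereo/depth pipeline; the per-class point clouds then give the "snapshot of the track".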
Data sources:
Use real-world datasets like Berkeley DeepDrive (BDD), KITTI, or Cityscapes (these might not transfer directly to our domain, so Domain Adaptation may be needed).
Use our environment generator (originally built for Gazebo). One option is to render the track in 3D software such as Blender to produce more realistic images; segmentation ground truth can then be generated by an additional render pass.
Constraints:
The network has to run at a minimum of 10 FPS on our on-board hardware (an NVIDIA Jetson TX2).
This budget also includes communication with the ROS core running on the main Intel NUC board.
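A minimal sketch for checking the FPS constraint on-device: time repeated inference calls after a short warm-up. The `dummy_infer` function is a stand-in for the real network, which would run on the Jetson; `n_warmup` and `n_runs` are arbitrary choices.

```python
import time
import numpy as np

def measure_fps(infer, frame, n_warmup=3, n_runs=20):
    """Time repeated inference calls and return frames per second."""
    for _ in range(n_warmup):                 # warm-up runs (caches, lazy init)
        infer(frame)
    start = time.perf_counter()
    for _ in range(n_runs):
        infer(frame)
    elapsed = time.perf_counter() - start
    return n_runs / elapsed

# Stand-in for a real segmentation network: argmax over random per-class scores.
def dummy_infer(frame):
    scores = np.random.rand(3, *frame.shape[:2])   # 3 classes, H x W
    return scores.argmax(axis=0)

frame = np.zeros((480, 640, 3), dtype=np.uint8)
fps = measure_fps(dummy_infer, frame)
print(f"{fps:.1f} FPS; meets 10 FPS target: {fps >= 10}")
```

Note that ROS message (de)serialization and transport to the NUC should be measured inside the timed loop on the real system, since the constraint covers the whole pipeline.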
Benchmarks
IoU and other scores on common semantic segmentation datasets
Empirical evaluation in the real world
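For the IoU benchmark, per-class IoU can be computed from a confusion matrix as TP / (TP + FP + FN), and mIoU is the mean over classes. A small self-contained sketch with made-up labels:

```python
import numpy as np

def confusion_matrix(pred, gt, n_classes):
    """Per-class confusion matrix (rows = ground truth, cols = prediction)."""
    idx = gt * n_classes + pred
    return np.bincount(idx.ravel(), minlength=n_classes**2).reshape(n_classes, n_classes)

def iou_per_class(conf):
    """IoU_c = TP_c / (TP_c + FP_c + FN_c) for each class c."""
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    return tp / np.maximum(tp + fp + fn, 1)   # avoid division by zero

# Toy labels for 6 pixels and 3 classes.
gt   = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 1, 1, 1, 2, 0])
conf = confusion_matrix(pred, gt, n_classes=3)
ious = iou_per_class(conf)
print(ious, ious.mean())   # [0.333... 0.666... 0.5] 0.5
```

The same routine works on flattened full-resolution masks, accumulating `conf` over a whole validation set before computing the scores.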
Tasks:
[ ] Review state-of-the-art segmentation algorithms for mobile robots.
[ ] Define a useful set of classes for the segmentation and compare it across different datasets (Cityscapes, BDD, KITTI).
[ ] Train a model on a public dataset or our own data and test its performance on the Jetson.
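For the class-definition task, a common approach is to remap each dataset's label IDs onto our own reduced class set with a lookup table. The mapping below is a hypothetical example (our class IDs and the choice of which Cityscapes labels to merge are assumptions); the listed Cityscapes IDs (7 = road, 8 = sidewalk, 24 = person, 26 = car) follow the official label definitions.

```python
import numpy as np

# Hypothetical reduced class set for our robot.
OUR_BACKGROUND, OUR_DRIVABLE, OUR_OBSTACLE = 0, 1, 2

# Example mapping from Cityscapes label IDs; unlisted IDs fall back to background.
CITYSCAPES_TO_OURS = {7: OUR_DRIVABLE, 8: OUR_DRIVABLE,
                      24: OUR_OBSTACLE, 26: OUR_OBSTACLE}

def build_lut(mapping, n_src_ids=34):
    """Build a flat lookup table so masks can be remapped by fancy indexing."""
    lut = np.full(n_src_ids, OUR_BACKGROUND, dtype=np.uint8)
    for src, dst in mapping.items():
        lut[src] = dst
    return lut

lut = build_lut(CITYSCAPES_TO_OURS)
cityscapes_mask = np.array([[7, 7, 26],
                            [8, 0, 24]])
our_mask = lut[cityscapes_mask]               # vectorized per-pixel remap
print(our_mask)                               # [[1 1 2] [1 0 2]]
```

An analogous table per dataset (BDD, KITTI) makes training data from all three sources compatible with one label space.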