Background
Using stereo vision, we can estimate the depth of objects in front of the robot. Combining that depth data with the robot's pose at the time the images were taken lets us build a 3D map of the robot's environment. This map is useful for navigation and for localizing the robot relative to objects in the environment so that we can accomplish tasks.
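To make the pose-plus-depth idea concrete, here is a minimal sketch of the core transform, assuming a pinhole model for the rectified left camera and a pose expressed as a rotation matrix and translation; all names here are illustrative:

```python
import numpy as np

def depth_pixel_to_world(u, v, depth, fx, fy, cx, cy, R_world_cam, t_world_cam):
    """Back-project one depth-map pixel into the world frame.

    (u, v)          pixel coordinates in the depth map
    depth           metric depth at that pixel (meters)
    fx, fy, cx, cy  pinhole intrinsics of the rectified left camera
    R_world_cam     3x3 rotation of the camera in the world frame (from robot pose)
    t_world_cam     3-vector camera position in the world frame
    """
    # Pinhole back-projection into the camera frame
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    p_cam = np.array([x, y, depth])
    # Transform into the world frame using the robot's pose
    return R_world_cam @ p_cam + t_world_cam
```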
Task
We would like to implement 3D mapping using the depth maps captured from stereo vision. Some work remains on stereo vision, mainly calibrating and testing with underwater images; improvements there will carry over directly, since the depth maps feed into 3D mapping. While finishing stereo vision, we can concurrently work on the first few steps of 3D mapping. Once stereo vision is functioning on underwater imagery, we can create our own dataset for 3D mapping and test the library or algorithm we choose. Specific steps are outlined below:
Finishing Stereo Vision
[x] Collect pairs of calibration images of a checkerboard pattern, taken underwater. We would also like pairs of images of objects such as buoys and gates, with each object's distance from the camera measured for every pair. Ideally, also record the robot's pose and take photos/measurements of the 3D environment for a future 3D mapping dataset
[ ] Calibrate the stereo cameras with the collected underwater images (see the calibration sketch after this list)
[ ] Generate depth maps for the calibration images and the object images, and verify they are close to the measured true depths
[ ] Replace the Q and remapping matrices located here with the ones from the underwater calibration
[ ] Update the code to account for noise in depth maps from underwater images, and for blank space inside the gate's bounding box when calculating the depth of the gate object (see the bounding-box sketch after this list)
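As a reference for the calibration steps above, here is a rough OpenCV sketch of the pipeline that produces the Q and remapping matrices; the image paths, board dimensions, and square size are placeholders:

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)    # inner-corner count of the checkerboard (placeholder)
SQUARE = 0.025    # square size in meters (placeholder)

# 3D object points for one view of the board, reused for every image pair
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts, size = [], [], [], None
for lf, rf in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(gl, BOARD)
    ok_r, corners_r = cv2.findChessboardCorners(gr, BOARD)
    if ok_l and ok_r:
        obj_pts.append(objp)
        left_pts.append(corners_l)
        right_pts.append(corners_r)
        size = gl.shape[::-1]  # (width, height)

# Calibrate each camera individually, then the stereo pair
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
_, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)

# Rectification yields Q and the remapping maps to swap into the repo
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
```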
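For the noise and blank-space concerns in the last item, one simple option is a robust statistic over only the valid depth pixels inside the detection's bounding box. A sketch, where the bounding-box format and the depth limits are assumptions:

```python
import numpy as np

def object_depth(depth_map, bbox, min_d=0.3, max_d=10.0):
    """Estimate an object's depth from the depth map inside its bounding box.

    bbox = (x1, y1, x2, y2) in pixel coordinates (assumed format).
    Pixels outside [min_d, max_d] meters are treated as noise or blank
    space (e.g. water visible through the gate) and ignored.
    """
    x1, y1, x2, y2 = bbox
    patch = depth_map[y1:y2, x1:x2]
    valid = patch[np.isfinite(patch) & (patch > min_d) & (patch < max_d)]
    if valid.size == 0:
        return None  # no reliable depth in the box
    # Median is robust to the remaining outliers and background pixels
    return float(np.median(valid))
```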
3D Mapping
[ ] Look for useful libraries that efficiently store probabilistic 3D maps (links can be collected here). We will need to figure out the input format the 3D mapping algorithm expects so we know how to reformat our stereo vision results; the voxel-map sketch after this list illustrates the idea
[ ] Replicate results from a dataset found online. Links to potential datasets should be collected here
[ ] Create our own dataset of stereo vision depth maps and the corresponding robot poses. We will want photos/measurements of the environment to verify the 3D mapping results. Ideally this can be done while collecting underwater images for stereo vision
[ ] Run our chosen 3D mapping algorithm on our own dataset to generate a 3D map and verify that it is reasonable
[ ] Create a ROS package that integrates stereo vision with 3D mapping, creating and updating a 3D map as the robot moves around the environment and captures images. The inputs will be the current pose from the sensor fusion package and the stereo vision results (see the node skeleton after this list)
[ ] Figure out how to publish this information so other sub teams can use it effectively, and implement that
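For the library search in the first item, OctoMap is one well-known option: it stores per-voxel occupancy probabilities in an octree. To show the kind of input and update involved, here is a toy log-odds voxel map; the resolution and log-odds increment are arbitrary choices:

```python
import numpy as np

class VoxelMap:
    """Toy probabilistic occupancy map: one log-odds value per voxel.

    Libraries like OctoMap implement this idea far more efficiently
    (octree storage, ray insertion); this sketch only shows the update.
    """
    def __init__(self, resolution=0.1, hit=0.85):
        self.res = resolution
        self.hit = hit      # log-odds increment per observation (assumed value)
        self.cells = {}     # voxel index -> accumulated log-odds

    def key(self, p):
        # Discretize a world point into its voxel index
        return tuple(np.floor(np.asarray(p) / self.res).astype(int))

    def insert_points(self, world_points):
        """Mark voxels containing stereo-derived world points as more occupied."""
        for p in world_points:
            k = self.key(p)
            self.cells[k] = self.cells.get(k, 0.0) + self.hit

    def occupancy(self, p):
        """Probability that the voxel containing p is occupied."""
        log_odds = self.cells.get(self.key(p), 0.0)
        return 1.0 / (1.0 + np.exp(-log_odds))
```

A real library would also cast rays from the camera to each point to lower the occupancy of the free space in between; that is omitted here for brevity.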
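For the last two items, a rough skeleton of the ROS node, assuming ROS 1 (rospy), a PoseStamped from sensor fusion, and a PointCloud2 from stereo vision; every topic name here is a placeholder:

```python
#!/usr/bin/env python
import rospy
import message_filters
from geometry_msgs.msg import PoseStamped
from sensor_msgs.msg import PointCloud2

class MappingNode:
    def __init__(self):
        # Topic names are placeholders until the real interfaces are settled
        pose_sub = message_filters.Subscriber("/sensor_fusion/pose", PoseStamped)
        cloud_sub = message_filters.Subscriber("/stereo/points", PointCloud2)
        # Pair up pose and depth measurements taken at (nearly) the same time
        sync = message_filters.ApproximateTimeSynchronizer(
            [pose_sub, cloud_sub], queue_size=10, slop=0.1)
        sync.registerCallback(self.update_map)
        # Republish the accumulated map for other sub teams
        self.map_pub = rospy.Publisher("/mapping/cloud", PointCloud2, queue_size=1)

    def update_map(self, pose, cloud):
        # TODO: transform the cloud into the world frame using the pose,
        # insert it into the 3D map, and publish the updated map
        self.map_pub.publish(cloud)  # placeholder passthrough

if __name__ == "__main__":
    rospy.init_node("stereo_mapping")
    MappingNode()
    rospy.spin()
```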