ChrisN-987 closed this issue 8 years ago
For the R200, there are two things that involve rectification. Which one do you want to be unrectified? And which coordinate system are you mapping to and from?
Notes:
So I am using PCL to get the pose of the depth camera. I then want to transform the RGB image so I can use the image as a texture for my model from the depth camera viewpoint.
Using the device extrinsics you know which color pixels correspond to each point in the point cloud. Do you want a textured mesh/point cloud like the one presented in the examples? Or do you want to map the whole image over the 3D model? If it's the latter, I guess you have to write your own code...
I can create a coloured point cloud using the extrinsics as you mention. What I want to do is the same as the 3D scanning in the RealSense SDK https://software.intel.com/en-us/blogs/2015/03/24/intel-realsense-3d-scanning-scan-then-convert-obj-to-ply-for-blender-unity by laying multiple images over a model.
I can already create the model and overlay the images using PCL and its texture-mesh capabilities: you define a camera, give it an image, and it textures the mesh. The issue is working out what to define as the camera. I know the 3D position of the depth camera, and I can get the 3D position of the RGB camera by translating. But the image still does not map correctly because of the RGB distortion. I want to undistort that RGB image offline and lay it onto my model from the position of the depth sensor so it all matches up.
The 3D scanning technique used in the RealSense SDK does several things, including smoothing the data and capturing it from multiple viewpoints. It's not just a single-view textured mesh.
because of the RGB distortion
librealsense can correct for the distortion and pull the image back to a pinhole projection model, with no distortion or rotation relative to the depth sensor, by enabling RS_STREAM_RECTIFIED_COLOR. The code for doing this is in src/Image.cpp if you want to pull RS_STREAM_COLOR and do it yourself offline.
lay it onto my model from the position of the depth sensor
There are a few ways to do this, but I'd recommend using the color camera as the reference frame, with RS_STREAM_RECTIFIED_COLOR and RS_STREAM_DEPTH_ALIGNED_TO_RECTIFIED_COLOR, to get images that are from the same viewpoint and both follow the same pinhole projection (that of the rectified color stream). Both of these transformations are done in software, and the code is in the library, so you can pull the raw streams and do the transformation offline if you wish.
I too am trying to get aligned color and depth images. When I change the streams to RS_STREAM_RECTIFIED_COLOR and RS_STREAM_DEPTH_ALIGNED_TO_RECTIFIED_COLOR I get the error: rs_enable_stream(device:0x7ff4fe0cf420, stream:RECTIFIED_COLOR, width:640, height:480, format:RGB8, framerate:60) crashed with: argument "stream" must be a native stream. Any idea what's going on?
You should request a native stream (color/depth) when enabling, but when you call get_frame_data() you can request the rectified streams.
@leonidk it worked!! Thanks a million!! Do you know if there is a way to rectify existing unaligned images (color and depth)?
The code path for it is in software. See /examples/cpp-tutorial-3-pointcloud.cpp for an example
Map from pixel coordinates in the depth image to pixel coordinates in the color image: as described on the website below (sorry, the site is only in Japanese, but I am sure you can figure out what is going on) http://unanancyowen.com/kinect-v2-coordinate-system-mapping/
and according to the official Kinect 2 docs at https://msdn.microsoft.com/en-us/library/microsoft.kinect.kinect.icoordinatemapper.mapcolorframetodepthspace.aspx
this seems to be how it is implemented in the Kinect case.
Now I would like to do the very same thing in RealSense, or rather librealsense. I have found a couple of existing examples, and paste a code excerpt here:
```cpp
////////////////////////////////////////////////////////////////////////////
// Called every frame; gets the data from the streams and displays them using OpenCV.
////////////////////////////////////////////////////////////////////////////
bool display_next_frame()
{
    _depth_intrin = _rs_camera.get_stream_intrinsics(rs::stream::depth);
    _color_intrin = _rs_camera.get_stream_intrinsics(rs::stream::color);
    rs::extrinsics depth_to_color = _rs_camera.get_extrinsics(rs::stream::depth, rs::stream::color);

    // Create OpenCV matrices wrapping the raw stream buffers
    cv::Mat depth16(_depth_intrin.height, _depth_intrin.width, CV_16U, (uchar *)_rs_camera.get_frame_data(rs::stream::depth)); // depth image
    cv::Mat rgb(_color_intrin.height, _color_intrin.width, CV_8UC3, (uchar *)_rs_camera.get_frame_data(rs::stream::color)); // color image
    // Note: the IR and aligned-to-depth images match the depth sensor's resolution, not the color sensor's
    cv::Mat ir(_depth_intrin.height, _depth_intrin.width, CV_8UC1, (uchar *)_rs_camera.get_frame_data(rs::stream::infrared2)); // IR image
    cv::Mat color_aligned_to_depth(_depth_intrin.height, _depth_intrin.width, CV_8UC3, (uchar *)_rs_camera.get_frame_data(rs::stream::color_aligned_to_depth));

    cv::namedWindow("color_aligned_to_depth Image", cv::WINDOW_AUTOSIZE);
    cv::imshow("color_aligned_to_depth Image", color_aligned_to_depth);

    const uint16_t * depth_image_ = (const uint16_t *)_rs_camera.get_frame_data(rs::stream::depth);
    float scale = _rs_camera.get_depth_scale();
    for (int dy = 0; dy < _depth_intrin.height; ++dy)
    {
        for (int dx = 0; dx < _depth_intrin.width; ++dx)
        {
            // Retrieve the 16-bit depth value and map it into a depth in meters
            uint16_t depth_value = depth_image_[dy * _depth_intrin.width + dx];
            float depth_in_meters = depth_value * scale;
            // Skip over pixels with a depth value of zero, which indicates no data
            if (depth_value == 0) continue;
            // Map from pixel coordinates in the depth image to pixel coordinates in the color image
            rs::float2 depth_pixel = { (float)dx, (float)dy };
            rs::float3 depth_point = _depth_intrin.deproject(depth_pixel, depth_in_meters);
            rs::float3 color_point = depth_to_color.transform(depth_point);
            rs::float2 color_pixel = _color_intrin.project(color_point);
            // myfile << depth_point.x << '\t' << depth_point.y << '\t' << depth_point.z << endl;
        }
    }
    return true;
}
```
Do you consider, based on your knowledge and experience, that this sequence:

```cpp
// Map from pixel coordinates in the depth image to pixel coordinates in the color image
rs::float2 depth_pixel = { (float)dx, (float)dy };
rs::float3 depth_point = _depth_intrin.deproject(depth_pixel, depth_in_meters);
rs::float3 color_point = depth_to_color.transform(depth_point);
rs::float2 color_pixel = _color_intrin.project(color_point);
```

is the right way to implement this?
@leonidk, the most recent version of librealsense does not seem to have the example /examples/cpp-tutorial-3-pointcloud.cpp. Also, where can I find the built executables? I followed the Linux instructions in distribution_linux.md.
I want to use RGB images to map onto a 3D model I am creating. Of course, I need to align the RGB images to the depth camera before projecting onto the model.
I understand I can use the rectified stream, but I want to keep the original images and rectify at a later stage, ideally when the device is not connected. Also, I would rather not save both the unrectified and rectified images.
I tried applying the rotation + translation from the color-to-depth extrinsics to the unrectified image, but the mapping is off. I don't seem to be able to access the functions that do the rectification. What would be the best way to approach this?