Closed: banuprathap closed this issue 3 years ago.
I put together this code following the example at #2634 and I get a black image when viewing the colorized cropped image. Please help.
rs2::processing_block(
    [this](rs2::frame f, rs2::frame_source& src)
    {
        if (!m_enableCropping.get()) {
            // No processing needed; pass the frame through unchanged
            src.frame_ready(f);
            return;
        }
        // For each input frame f, do:
        const int w = f.as<rs2::video_frame>().get_width();
        const int h = f.as<rs2::video_frame>().get_height();
        // rs2::frame --> cv::Mat (16-bit depth, no copy)
        Mat image(Size(w, h), CV_16UC1, (void*)f.get_data(), Mat::AUTO_STEP);
        // Create ROI for cropping. Note: cv::Rect is (x, y, width, height),
        // where x is the column offset and y is the row offset.
        int startX = m_startColumn.get();
        int startY = m_startRow.get();
        int width  = w - startX - m_endColumn.get();
        int height = h - startY - m_endRow.get();
        Mat ROI(image, Rect(startX, startY, width, height));
        // Get incoming frame profile
        auto p = f.get_profile().as<rs2::video_stream_profile>();
        // Adjust the intrinsics for the cropped frame. A crop only removes
        // edge pixels, so the focal lengths are unchanged and the principal
        // point shifts by the top-left crop offset (scaling the intrinsics
        // would only be correct for a resize, not a crop).
        auto i = p.get_intrinsics();
        rs2_intrinsics intr = { width, height, i.ppx - startX, i.ppy - startY, i.fx, i.fy, i.model,
            { i.coeffs[0], i.coeffs[1], i.coeffs[2], i.coeffs[3], i.coeffs[4] } };
        // Create a profile for the new frame
        auto output_profile = p.clone(p.stream_type(), p.stream_index(), p.format(),
            width, height, intr);
        // Allocate a frame object
        auto res = src.allocate_video_frame(output_profile, f, 0,
            width, height, f.as<rs2::video_frame>().get_bytes_per_pixel() * width, RS2_EXTENSION_DEPTH_FRAME);
        // Copy cv::Mat --> rs2::frame. The ROI is a view into the full image,
        // so its rows are NOT contiguous in memory; a single memcpy from
        // ROI.data would pull in pixels from outside the crop (one cause of
        // a black/garbled output image). Copy row by row instead.
        auto* dst = (uint8_t*)res.get_data();
        for (int row = 0; row < height; ++row)
            memcpy(dst + row * width * 2, ROI.ptr(row), width * 2); // 2 bytes per pixel
        // Send the resulting frame to the output queue
        src.frame_ready(res);
    });
Hi @banuprathap Before looking at cv::Mat to rs2::frame conversion, I wonder if another approach to dealing with the mounting blocking the view would be to simply set a minimum depth distance for the camera, such as 0.5 meters, using scripting. Close-range depth data representing objects close to the camera lenses, such as the mounting, would then be excluded from the depth image.
A custom setting that ignores close range detail could be defined by configuring a Threshold Filter post-processing filter to set a minimum and maximum depth distance.
The link below includes a script that demonstrates setting up a threshold filter in C++.
https://stackoverflow.com/questions/59054413/intel-realsense-depth-camera-d435i-noises-and-mounds
Change the '0' value of (RS2_OPTION_MIN_DISTANCE, 0) to what you want the minimum distance in meters to be, such as '0.5'.
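For illustration, the filter setup described above might look like the following minimal sketch. It assumes a connected camera and a default pipeline; the 0.5 m / 4.0 m values are example choices, not recommendations:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    // Start streaming with default settings
    rs2::pipeline pipe;
    pipe.start();

    // Configure a threshold filter: depth outside [0.5 m, 4.0 m] is discarded
    rs2::threshold_filter thresh;
    thresh.set_option(RS2_OPTION_MIN_DISTANCE, 0.5f); // ignore closer than 0.5 m
    thresh.set_option(RS2_OPTION_MAX_DISTANCE, 4.0f); // ignore farther than 4.0 m

    // Apply the filter to each incoming depth frame
    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();
    rs2::frame filtered = thresh.process(depth);
    return 0;
}
```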
If you would prefer to continue with your current method of using cropping then I will be happy to discuss that further with you.
Hi @MartyG-RealSense. Unfortunately the threshold filter didn't work for us. We still want to detect depth values within that range in other pixel areas.
Hi @banuprathap Can you provide more information please?
Thresholding is not the right approach for us. The sensor is mounted in such a way that the robot's body is within the FOV. Simply disabling close-range depth values with thresholding would not be ideal, because the depth values from regions not covered by the robot body are still useful to us.
Ideally we would create a pixel mask that would set only the pixels covered by robot body to zero. Is that possible?
Cropping out the entire rows and columns of pixels is acceptable too. Hence my initial question.
I recall a RealSense project for segmenting out from the background a particular color of a simple background surface such as a wall. A close-up of the robot body in front of the camera might produce a similar solid-colored area that could be segmented out.
Thanks for the input. I gather the suggested approach relies on the RGB frame. I failed to mention that we don't have access to RGB from the camera.
Whilst setting up a bounding box for XY instead of depth Z and cropping out the data outside of the bounding box can produce the results that you seek, doing so tends to be complicated. Here is an example description of such a method:
https://github.com/IntelRealSense/librealsense/issues/2016#issuecomment-403804157
A cruder but still effective approach may be to reduce depth resolution so that the sides of the depth view are restricted on the X plane, like the 'letterbox' when watching television in 4:3 format instead of 16:9 widescreen. The image below demonstrates the same scene in 848x480 (upper) and 640x480 (lower).
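For reference, requesting the lower resolution might look like this configuration sketch (the stream parameters are example values; Z16 at 30 fps is a common depth mode, but check your camera's supported profiles):

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    // Request 640x480 depth instead of 848x480 to narrow the horizontal
    // field of view (the 'letterbox' effect described above)
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);

    rs2::pipeline pipe;
    pipe.start(cfg); // streams will use the requested profile
    return 0;
}
```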
Hi @banuprathap Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received.
Issue Description
I want to ignore the edges of the depth frame (since they are always occluded by the mounting). As a starting point, I was able to convert the frame to cv::Mat and crop the image.
Now I'm trying to convert the Mat back to rs2::frame, and I noticed a note from @dorodnic on #2634 that frame parameters have to be explicitly specified. I found a useful example in rs-depth-filter.
For the sake of example, I want to crop out the bottom 100 rows and the right 20 columns of a 1280x720 frame. Do I have to recompute the intrinsics for the new frame resolution? If yes, how do I recompute them for the resulting 1260x620 frame?
Also, is there a better approach to this?
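For what it's worth, the intrinsics arithmetic for a crop is straightforward: cropping only removes edge pixels, so fx and fy are unchanged and the principal point shifts by the top-left crop offset. A minimal sketch (`SimpleIntrinsics` and `crop_intrinsics` are hypothetical names for illustration, not librealsense API; in practice you would fill in an `rs2_intrinsics` the same way):

```cpp
// Minimal stand-in for the relevant fields of rs2_intrinsics
struct SimpleIntrinsics { int width, height; float ppx, ppy, fx, fy; };

// Recompute intrinsics after cropping 'left'/'right' columns and
// 'top'/'bottom' rows. Focal lengths are unchanged; the principal
// point shifts by the top-left offset of the crop.
SimpleIntrinsics crop_intrinsics(SimpleIntrinsics in,
                                 int left, int top, int right, int bottom)
{
    return { in.width - left - right, in.height - top - bottom,
             in.ppx - left, in.ppy - top, in.fx, in.fy };
}
```

For the crop above (bottom 100 rows, right 20 columns) the top-left corner does not move, so ppx and ppy are unchanged and only width/height shrink to 1260x620.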