@95ImadEL , at what point does this code generate the exception?
Also note that filtering aligned depth frames is not recommended. You should reverse the order and apply the post-processing filters before alignment - otherwise the alignment error margin, aggregated via the calibration tolerances and data discretization, will be extrapolated by the filters.
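(For reference, a minimal sketch of that recommended ordering - assuming frames come from a live rs2::pipeline named pipe, and using the standard rs2 post-processing classes:)
// Create the processing blocks once, outside the frame loop
rs2::disparity_transform depth2disparity;         // depth -> disparity
rs2::disparity_transform disparity2depth(false);  // disparity -> depth
rs2::spatial_filter  spat;
rs2::temporal_filter temp;
rs2::align align_to_color(RS2_STREAM_COLOR);
// Per frame: filter in the disparity domain first, convert back, then align
rs2::frameset fs = pipe.wait_for_frames();
fs = fs.apply_filter(depth2disparity)
       .apply_filter(spat)
       .apply_filter(temp)
       .apply_filter(disparity2depth)             // back to depth domain
       .apply_filter(align_to_color);             // align only after filtering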
@ev-mp thank you for the reply. In fact it generates the exception at
depth = depth2disparity.process(depth);
I think that depth is null but I don't understand why. Thank you.
@ev-mp if you want I can provide the whole code.
Just a wild guess - depth2disparity is applicable only to stereo-based devices, and in the case of a software device the mockup sensor must be properly configured to mimic all the attributes of a stereo depth camera: intrinsics/extrinsics/baseline/etc. It is possible that one of the attributes is not populated properly. Can you post the device/sensor initialization and configuration segment? What type of camera did you use to generate the input PNG files? Also - did you try to apply the filters before alignment, and if so, what were the results?
In fact I applied the filters before alignment and I have the same problem. The PNG files were recorded using a D435. This is the full code:
// License: Apache 2.0. See LICENSE file in root directory.
// Copyright(c) 2019 Intel Corporation. All Rights Reserved.
#include <librealsense2/rs.hpp>
#include <librealsense2/hpp/rs_internal.hpp>
#include <rs_pipeline.hpp>
#include <rs_device.hpp>
#include <iostream>
#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <chrono>
#include "api_how_to.h"
#include "tools.h"
#include "types.h"
#define DEPTH "/media/imad/0EC2ECF4C2ECE0C7/RGB-D/Dataset_CamUbloxNavi/Depth/"
#define RGB "/media/imad/0EC2ECF4C2ECE0C7/RGB-D/Dataset_CamUbloxNavi/RGB/"
#define PATH_TO_SAVE_ALIGNED_DEPTH "/media/imad/0EC2ECF4C2ECE0C7/software_device/depth/"
#define PATH_TO_SAVE_ALIGNED_RGB "/media/imad/0EC2ECF4C2ECE0C7/software_device/RGB/"
#define NUMBER_OF_FRAME 14730
using namespace std;
using namespace cv;
using namespace rs2;
const int W = 640;
const int H = 480;
const int BPP_D = 2;
struct intrinsics depth_intrin = {
// Depth intrinsics
depth_intrin.width = 640,
depth_intrin.height = 480,
depth_intrin.ppx = 315.914,
depth_intrin.ppy = 242.749,
depth_intrin.fx = 384.775,
depth_intrin.fy = 384.775,
depth_intrin.model = DISTORTION_BROWN_CONRADY,
depth_intrin.coeffs[0] = 0,
depth_intrin.coeffs[1] = 0,
depth_intrin.coeffs[2] = 0,
depth_intrin.coeffs[3] = 0,
depth_intrin.coeffs[4] = 0,
};
struct intrinsics color_intrin = {
// Color intrinsics
color_intrin.width = 640,
color_intrin.height = 480,
color_intrin.ppx = 317.973,
color_intrin.ppy = 246.758,
color_intrin.fx = 618.547,
color_intrin.fy = 618.56,
color_intrin.model = DISTORTION_INVERSE_BROWN_CONRADY,
color_intrin.coeffs[0] = 0,
color_intrin.coeffs[1] = 0,
color_intrin.coeffs[2] = 0,
color_intrin.coeffs[3] = 0,
color_intrin.coeffs[4] = 0,
};
struct extrinsics depth_to_color = { // depth to color extrinsics
/**< Column-major 3x3 rotation matrix */
depth_to_color.rotation[0] = 0.99997,
depth_to_color.rotation[1] = -0.00643702,
depth_to_color.rotation[2] = -0.00422144,
depth_to_color.rotation[3] = 0.00643976,
depth_to_color.rotation[4] = 0.999979,
depth_to_color.rotation[5] = 0.000635677,
depth_to_color.rotation[6] = 0.00421726,
depth_to_color.rotation[7] = 0.00331597,
depth_to_color.rotation[8] = 0.999991,
/**< Three-element translation vector, in meters */
depth_to_color.translation[0] = 0.0147439,
depth_to_color.translation[1] = -0.000662843,
depth_to_color.translation[2] = 0.999991,
};
Mat image_rgb;
Mat image_depth(depth_intrin.height,depth_intrin.width,CV_16U) ;
Mat aligned_depth(color_intrin.height,color_intrin.width,CV_16U);
int main(int argc, char * argv[])
{
rs2_intrinsics depth_intrinsics{ depth_intrin.width, depth_intrin.height, depth_intrin.ppx, depth_intrin.ppy, depth_intrin.fx , depth_intrin.fy , RS2_DISTORTION_BROWN_CONRADY,{ 0,0,0,0,0 } };
rs2_intrinsics color_intrinsics = { color_intrin.width, color_intrin.height, color_intrin.ppx, color_intrin.ppy, color_intrin.fx, color_intrin.fy, RS2_DISTORTION_INVERSE_BROWN_CONRADY ,{ 0,0,0,0,0 } };
//Post_processing-------------------------------------------------------------
// Define transformations from and to Disparity domain
rs2::disparity_transform depth2disparity;
rs2::disparity_transform disparity2depth(false);
//Spatial filter
// Define spatial filter (edge-preserving)
rs2::spatial_filter spat;
spat.set_option(RS2_OPTION_HOLES_FILL, 5); // 5 = fill all the zero pixels
// Define temporal filter
rs2::temporal_filter temp;
// Define align objects. One will be used to align to color.
// Creating align object is an expensive operation that should not be performed in the main loop
rs2::align align_to_color(RS2_STREAM_COLOR);
//Create SW device------------------------------------------------------------
rs2::software_device dev; // Create software-only device
// Define two sensors, one for depth and one for color streams
auto depth_sensor = dev.add_sensor("Depth");
auto color_sensor = dev.add_sensor("Color");
//Depth-----------------------------------------------------------------------
auto depth_stream = depth_sensor.add_video_stream({ RS2_STREAM_DEPTH, 0, 0, W, H, 30, BPP_D, RS2_FORMAT_Z16, depth_intrinsics });
depth_sensor.add_read_only_option(RS2_OPTION_DEPTH_UNITS, 0.001f);
//Color-----------------------------------------------------------------------
auto color_stream = color_sensor.add_video_stream({ RS2_STREAM_COLOR, 0, 1, W, H, 30, 1, RS2_FORMAT_RGB8, color_intrinsics });
// synchronization model from the D435 camera
dev.create_matcher(RS2_MATCHER_DLR_C);
rs2::syncer sync;
depth_sensor.open(depth_stream);
color_sensor.open(color_stream);
depth_sensor.start(sync);
color_sensor.start(sync);
depth_stream.register_extrinsics_to(color_stream, { { depth_to_color.rotation[0],depth_to_color.rotation[1],depth_to_color.rotation[2],depth_to_color.rotation[3]
,depth_to_color.rotation[4],depth_to_color.rotation[5],depth_to_color.rotation[6],depth_to_color.rotation[7],depth_to_color.rotation[8] },{ depth_to_color.translation[0],
depth_to_color.translation[1],depth_to_color.translation[2] } });
int frame_number = 0;
int i = 0;
while(frame_number<NUMBER_OF_FRAME){
printf("%d\n", frame_number);
std::stringstream n_algn_depth_png_file, n_algn_rgb_png_file;
n_algn_depth_png_file << DEPTH << "frame"<< frame_number <<".png";
n_algn_rgb_png_file << RGB << "frame"<<frame_number <<".png";
image_depth = imread(n_algn_depth_png_file.str().c_str(), -1);
image_rgb = imread(n_algn_rgb_png_file.str().c_str(), -1);
std::vector<uint16_t> pixels_depth(640 * 480 * 2, 0);
memcpy(pixels_depth.data(), image_depth.data, 640*480*2);
std::vector<uint8_t> pixels_color(640 * 480 * 1, 0);
memcpy(pixels_color.data(), image_rgb.data, 640*480*1);
// We have created the depth frame; now inject it into the depth sensor
depth_sensor.on_video_frame({ pixels_depth.data(), // Frame pixels from capture API
[](void*) {}, // Custom deleter (if required)
W*BPP_D, BPP_D, // Stride and Bytes-per-pixel
(rs2_time_t)frame_number * 16, RS2_TIMESTAMP_DOMAIN_HARDWARE_CLOCK, frame_number, // Timestamp, Frame# for potential sync services
depth_stream });
// We have created the color frame; now inject it into the color sensor
color_sensor.on_video_frame({ pixels_color.data(), // Frame pixels from capture API
[](void*) {}, // Custom deleter (if required)
W*1, 1, // Stride and Bytes-per-pixel
(rs2_time_t)frame_number * 16, RS2_TIMESTAMP_DOMAIN_HARDWARE_CLOCK, frame_number, // Timestamp, Frame# for potential sync services
color_stream });
++frame_number;
//wait for synchronized pairs from the syncer
rs2::frameset fset = sync.wait_for_frames();
rs2::frame depth = fset.first_or_default(RS2_STREAM_DEPTH);
rs2::frame color = fset.first_or_default(RS2_STREAM_COLOR);
// if(depth && color) {
//spatial filter
// To make sure far-away objects are filtered proportionally
// we try to switch to disparity domain
printf("done! \n");
depth = depth2disparity.process(depth);
// Apply spatial filtering
depth = spat.process(depth);
// Apply temporal filtering
depth = temp.process(depth);
// If we are in disparity domain, switch back to depth
depth = disparity2depth.process(depth);
// Align all frames to color viewport
fset = align_to_color.process(fset);
// Get the depth frame's dimensions
float width = 640;//depth.get_width();
float height = 480;//depth.get_height();
// Write images to disk
std::stringstream depth_png_file, rgb_png_file;
depth_png_file << PATH_TO_SAVE_ALIGNED_DEPTH << i <<".png";
rgb_png_file << PATH_TO_SAVE_ALIGNED_RGB << i <<".png";
std::vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_PNG_COMPRESSION);
compression_params.push_back(0); // PNG compression level 0-9 (PNG is always lossless; 0 = fastest/largest file, 9 = slowest/smallest file)
// Create mat
Mat Dimg(Size(width, height), CV_16UC1, (void*)depth.get_data(), Mat::AUTO_STEP);
imwrite(depth_png_file.str().c_str(), Dimg, compression_params); // write the depth image
Mat Rgbimg(Size(width, height), CV_8UC3, (void*)color.get_data(), Mat::AUTO_STEP);
imwrite(rgb_png_file.str().c_str(), Rgbimg, compression_params); // write the color image
i++;
//}
}
return EXIT_SUCCESS;
}
I guess I've got the same problem: after applying the depth_to_disparity filter, the program gets an exception when I'm trying to get intrinsics this way:
frames = frames.apply_filter(dec_filter).apply_filter(depth_to_disparity).apply_filter(spat_filter).apply_filter(temp_filter).apply_filter(hole_filter);
auto frames_aligned = align_to_depth.process(frames);
auto inrist = rs2::video_stream_profile(frames_aligned.get_depth_frame().get_profile()).get_intrinsics();
@BadMachine - you're missing the reverse transformation at the end of the filter chain: disparity2depth,
without which the filtered depth remains in the disparity domain.
So when you call
rs2::video_stream_profile(frames_aligned.get_depth_frame().get_profile()).get_intrinsics();
the highlighted part probably yields a nullptr that is later dereferenced, which likely results in the segfault.
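A sketch of the corrected chain, based on the snippet above (disparity2depth here is assumed to be an additional rs2::disparity_transform constructed with false, created once next to the other filters):
rs2::disparity_transform disparity2depth(false);  // disparity -> depth
frames = frames.apply_filter(dec_filter)
               .apply_filter(depth_to_disparity)
               .apply_filter(spat_filter)
               .apply_filter(temp_filter)
               .apply_filter(hole_filter)
               .apply_filter(disparity2depth);    // convert back to depth before alignment
auto frames_aligned = align_to_depth.process(frames);
auto intrin = rs2::video_stream_profile(frames_aligned.get_depth_frame().get_profile()).get_intrinsics();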
@ev-mp you were right. Thanks again!
@ev-mp what do you think about my code?
@95ImadEL , I think it has more than 200 lines of code and some 8k+ characters ... :)
I think it will take some time to assess what is going on, but one thing that I can already recognize, which is also hinted at by the location of the exception, is that in order to use the disparity<=>depth transformations you must define the RS2_OPTION_STEREO_BASELINE option.
See the following reference code that we use to test this functionality in unit-tests: https://github.com/IntelRealSense/librealsense/blob/development/unit-tests/unit-tests-post-processing.cpp#L221-L241.
You can use the above link to step through it and verify that all the required options/attributes are properly initialized/populated.
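In the software-device code above, that would mean registering the baseline on the depth sensor next to the depth units. A minimal sketch (the 50.f value, in millimeters, is only a placeholder roughly matching a D435 - use the baseline reported by your physical camera, e.g. by querying RS2_OPTION_STEREO_BASELINE on the real depth sensor):
// Register depth units and stereo baseline so the disparity<->depth transform has the data it needs
depth_sensor.add_read_only_option(RS2_OPTION_DEPTH_UNITS, 0.001f);
depth_sensor.add_read_only_option(RS2_OPTION_STEREO_BASELINE, 50.f); // placeholder baseline, in mm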
@ev-mp thank you so much ^^
@ev-mp I notice that when I comment out either:
color_sensor.on_video_frame({ pixels_color.data(), [](void*) {}, W*BPP_C, BPP_C, (rs2_time_t)frame_number+i, RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME, frame_number, color_stream });
or
depth_sensor.on_video_frame({ pixels_depth.data(), [](void*) {}, W*BPP_D, BPP_D, (rs2_time_t)frame_number+i, RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME, frame_number, depth_stream });
it then works for depth or color (whichever remains), but I don't understand why on_video_frame can't handle both?
@95ImadEL , I think you should replace the (rs2_time_t)frame_number+i
parameter in the above code with deltas that are consistent with the 30 FPS declared in the code:
auto depth_stream = depth_sensor.add_video_stream({ RS2_STREAM_DEPTH, 0, 0, W, H, 30, BPP_D, RS2_FORMAT_Z16, depth_intrinsics });
//Color-----------------------------------------------------------------------
auto color_stream = color_sensor.add_video_stream({ RS2_STREAM_COLOR, 0, 1, W, H, 30, 1, RS2_FORMAT_RGB8, color_intrinsics });
Using arbitrary intervals may not work, as the syncer is not designed to rectify them for the user (GIGO).
@ev-mp so how can I choose the right value for the timestamp?
I have the same issue as @95ImadEL . I'm trying to repopulate depth and color frames using rs2::software_device, but when creating the frameset only one comes through (my code is very similar to the above). Running it multiple times, the non-null frame is sometimes the color and sometimes the depth, but never both, so I'm assuming it is a timing issue. @ev-mp your comment on setting the rs2_time_t is vague and there doesn't seem to be documentation for it, so would you be able to explain this part in more detail so we can get this to work? Thank you.
@patrickabadi I think you should extract the timestamps of each frame for both depth and color, then use them in both calls:
color_sensor.on_video_frame({ pixels_color.data(), [](void*) {}, W*BPP_C, BPP_C, (rs2_time_t)timestamp_colorframe_i_read_it_from_timestamp_colorfile_txt_in_millisecond, RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME, frame_number, color_stream });
or
depth_sensor.on_video_frame({ pixels_depth.data(), [](void*) {}, W*BPP_D, BPP_D, (rs2_time_t)timestamp_depthframe_i_read_it_from_timestamp_depthfile_txt_in_millisecond, RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME, frame_number, depth_stream });
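For example, a minimal sketch of loading such per-frame timestamps, assuming hypothetical depth_timestamps.txt / color_timestamps.txt files holding one millisecond value per line in frame order (the file names and helper are placeholders, not part of the code above):
#include <fstream>
#include <string>
#include <vector>
// Hypothetical helper: read one timestamp (in milliseconds) per line, in frame order
static std::vector<double> load_timestamps(const std::string& path)
{
    std::vector<double> ts;
    std::ifstream in(path);
    double t = 0.0;
    while (in >> t)
        ts.push_back(t);
    return ts;
}
// Before the injection loop:
auto depth_ts = load_timestamps("depth_timestamps.txt");
auto color_ts = load_timestamps("color_timestamps.txt");
// Then pass (rs2_time_t)depth_ts[frame_number] and (rs2_time_t)color_ts[frame_number]
// as the timestamp arguments of the two on_video_frame() calls above.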
It works for me, thank you @ev-mp for the remark
@patrickabadi hello,
The timestamps injected into on_video_frame
should be consistent with the declared FPS rate:
when the Depth stream runs at 30 FPS, the timestamp field for frame index i, in milliseconds (doc), should be advanced according to
Tstart + 33.333*i
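Applied to the injection loop in the code above, that could look like the following sketch (Tstart is an assumed arbitrary start time; the same timestamp is reused for the matching depth and color frames so the syncer can pair them; BPP_C denotes the color bytes-per-pixel as in the snippets above):
const double fps = 30.0;
const double Tstart = 0.0;  // assumed arbitrary start time, in milliseconds
rs2_time_t ts = (rs2_time_t)(Tstart + (1000.0 / fps) * frame_number);  // ~33.333 ms per frame
depth_sensor.on_video_frame({ pixels_depth.data(), [](void*) {}, W*BPP_D, BPP_D,
                              ts, RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME, frame_number, depth_stream });
color_sensor.on_video_frame({ pixels_color.data(), [](void*) {}, W*BPP_C, BPP_C,
                              ts, RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME, frame_number, color_stream });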
@95ImadEL , thank you for the confirmation - do you still have any unresolved issues with the above flow?
@ev-mp thank you so much, it's ok :+1:
Issue Description
Hello,
Can I use the on_video_frame function like that? `int frame_number = 0; int i = 0;
While executing this code on an RGB-D PNG dataset, I got this error:
terminate called after throwing an instance of 'rs2::error'
  what():  null pointer passed for argument "frame"
Aborted (core dumped)
Thanks in advance.