jeezrick closed this issue 5 days ago
The sync result I expect looks like this: ts_prev_depth -> ts_cur_imu_0 -> ts_cur_imu_1 -> ... -> ts_cur_imu_n -> ts_cur_depth -> ...
Right now, that's not what I see. So what should I do about it?
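To make the pattern concrete, this is roughly how I intend to group IMU samples per depth frame once the timestamps interleave as expected. A minimal sketch with made-up timestamps (the function name and rates below are hypothetical, not from my actual run):

```python
from bisect import bisect_right

def imu_between(imu_ts, t_prev_depth, t_cur_depth):
    """Return the IMU timestamps that fall in (t_prev_depth, t_cur_depth].

    imu_ts must be sorted ascending; values are in milliseconds, the
    unit librealsense reports timestamps in.
    """
    lo = bisect_right(imu_ts, t_prev_depth)  # first sample strictly after prev depth frame
    hi = bisect_right(imu_ts, t_cur_depth)   # samples up to and including cur depth frame
    return imu_ts[lo:hi]

# Hypothetical timestamps: gyro at 200 Hz (5 ms spacing), depth at ~30 fps.
imu = [1000.0 + 5.0 * i for i in range(20)]   # 1000, 1005, ..., 1095 ms
print(imu_between(imu, 1000.0, 1033.3))       # -> [1005.0, 1010.0, 1015.0, 1020.0, 1025.0, 1030.0]
```

Each depth frame then gets exactly the IMU samples that arrived since the previous depth frame, matching the ts_prev_depth -> imu... -> ts_cur_depth pattern above.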
Hi @jeezrick It is normal for there to be an offset between IMU and RGB, as advised in the https://github.com/IntelRealSense/librealsense/issues/3205#issuecomment-460878526 case that you linked to.
In regard to IMU and depth, each IMU data packet is timestamped using the depth sensor hardware clock to allow temporal synchronization between gyro, accel and depth frames, as stated in the Core Capabilities section of Intel's guide for getting IMU data.
https://www.intelrealsense.com/how-to-getting-imu-data-from-d435i-and-t265/
In order for data to be timestamped with the depth sensor hardware clock though, you need to have support for hardware metadata enabled in the librealsense SDK. Hardware metadata support will be enabled if you do one of the following:
Install librealsense from Debian packages.
Build librealsense from source code with CMake, with the -DFORCE_RSUSB_BACKEND=TRUE flag included in the CMake build command.
Build librealsense from source code with CMake and apply a kernel patch script to patch the kernel for hardware metadata support.
Thanks, I'm aware that depth and IMU have some offset; the IMU data is supposed to be ahead of the depth data by about 40 ms, which is not the case when I run this Python code.
But after experimenting, it turns out you just need to run the code for 4-5 seconds before it reaches the above pattern. So, all fine. Also, I am curious: what does this code do? What is its effect on the final data?
```cpp
for (rs2::sensor sensor : sensors)
    if (sensor.supports(RS2_CAMERA_INFO_NAME)) {
        ++index;
        if (index == 1) {
            // Stereo (depth) module
            sensor.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 1);
            // sensor.set_option(RS2_OPTION_AUTO_EXPOSURE_LIMIT, 50000);
            sensor.set_option(RS2_OPTION_EMITTER_ENABLED, 1); // emitter on for depth information
        }
        // std::cout << " " << index << " : " << sensor.get_info(RS2_CAMERA_INFO_NAME) << std::endl;
        get_sensor_option(sensor);
        if (index == 2) {
            // RGB camera
            sensor.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 1);
            // sensor.set_option(RS2_OPTION_EXPOSURE, 80.f);
        }
        if (index == 3) {
            // Motion module (IMU)
            sensor.set_option(RS2_OPTION_ENABLE_MOTION_CORRECTION, 0);
        }
    }
```
It looks as though when the index number is 1, auto-exposure and the camera's IR emitter component are enabled. Auto-exposure lets the camera automatically adjust exposure as lighting conditions in the real-world scene change, whilst the IR emitter, when enabled, projects an invisible pattern of dots onto the surfaces of objects in the scene to aid depth analysis of those surfaces.
Index 2 enables auto-exposure only.
Index 3 disables IMU motion correction, meaning that raw IMU data will be used and the SDK will not attempt to 'fix' the data to reduce inaccuracies.
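Conceptually, motion correction applies the device's stored IMU calibration to each raw sample: librealsense keeps a 3x4 array per motion stream (scale and cross-axis terms plus a bias column, see `rs2_motion_device_intrinsic`). A minimal sketch of that idea, assuming an affine `corrected = A * raw + b` convention and entirely made-up calibration values (check the SDK source for the exact layout and sign convention):

```python
def correct_motion_sample(raw, intrinsic):
    """Apply an IMU intrinsic calibration to one raw accel/gyro sample.

    `intrinsic` mimics the 3x4 `data` array of librealsense's
    rs2_motion_device_intrinsic: row i holds the scale/cross-axis
    terms for axis i in columns 0-2 and an offset term in column 3.
    All numbers here are illustrative, not real calibration data.
    """
    return [
        sum(intrinsic[i][j] * raw[j] for j in range(3)) + intrinsic[i][3]
        for i in range(3)
    ]

# Identity scale with a small made-up offset on z: only z changes.
intrinsic = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.15],
]
raw_accel = [0.02, -0.01, 9.95]  # m/s^2, hypothetical raw reading
print(correct_motion_sample(raw_accel, intrinsic))
```

With RS2_OPTION_ENABLE_MOTION_CORRECTION set to 0, no such transform is applied and you receive the raw values as reported by the IMU.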
Hi @jeezrick Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received.
Issue Description
Hi, I am running Python code in which I create two pipelines to get the color/depth stream and the IMU stream separately, and I print a log line right after I receive each piece of data. As you can see, the depth/color timestamp is roughly 50 ms ahead of the accel/gyro timestamp.
So what should I do to sync the data? If I want to use the current image data, how can I assign the correct IMU data to it?
I have read issues/11330, issues/2188, issues/4525, and issues/3205, but I'm still not sure about the solution.
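One way I'm considering to assign IMU data to a frame is to keep the IMU timestamps sorted and pick the sample whose timestamp is nearest the frame timestamp. A sketch with made-up numbers (interpolating between the two neighbouring samples would be a refinement):

```python
from bisect import bisect_left

def nearest_imu(imu_ts, frame_ts):
    """Index of the IMU timestamp closest to a frame timestamp.

    imu_ts must be sorted ascending, in milliseconds (the unit
    librealsense reports). All values below are illustrative only.
    """
    i = bisect_left(imu_ts, frame_ts)
    if i == 0:
        return 0
    if i == len(imu_ts):
        return len(imu_ts) - 1
    # choose whichever neighbour is closer to the frame timestamp
    return i if imu_ts[i] - frame_ts < frame_ts - imu_ts[i - 1] else i - 1

gyro_ts = [0.0, 5.0, 10.0, 15.0, 20.0]  # hypothetical 200 Hz gyro timestamps
print(nearest_imu(gyro_ts, 11.9))       # 10.0 is closer than 15.0 -> index 2
```

This assumes both streams' timestamps are on the same clock domain, which is why the hardware metadata support discussed above matters.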