IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

What is the difference between Tare Calibration in the Depth Quality Tool and the C++ API? #13508

Open diplomatist opened 2 days ago

diplomatist commented 2 days ago

Required Info
Camera Model D455
Firmware Version 5.16.0.1
Operating System & Version ubuntu20.04
Kernel Version (Linux Only) 5.10.160
Platform rk3588S
SDK Version 2.56.2
Language C++
Segment calibration

Issue Description

When using the run_tare_calibration C++ API, I found that in the same scenario the depth_quality program can achieve a smaller health value, while the API reaches 0.39.

Example code for calibration using API:

// Assumes glog for the LOG(...) macros.
#include <librealsense2/rs.hpp>
#include <glog/logging.h>

#include <atomic>
#include <chrono>
#include <cstdlib>
#include <thread>

int main(int argc, char* argv[]) {

    float ground_truth = std::atof(argv[1]);
    int calibrateState = 0;
    rs2::context ctx;
    std::atomic_bool reconnected(false);
    ctx.set_devices_changed_callback(
        [&reconnected](rs2::event_information& info) {
            LOG(INFO) << "camera device change triggered!";
            for (auto&& dev_new : info.get_new_devices()) {
                std::string new_id = dev_new.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
                LOG(INFO) << "camera device connected: " << new_id;
                reconnected.store(true);
                break;
            }
        });
    rs2::device device;
    device = ctx.query_devices().front();
    device.hardware_reset();
    auto wait_until = std::chrono::system_clock::now() + std::chrono::seconds(30);
    while (std::chrono::system_clock::now() < wait_until)
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        if (reconnected.load()) {
            LOG(INFO) << "camera reset complete!";
            break;
        }
    }
    if (!reconnected) {
        LOG(ERROR) << "camera no connected";
    }
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 424, 240, RS2_FORMAT_Z16, 30);
    device = pipe.start(cfg).get_device();
    for (int i = 0; i < 60; i++) {
        pipe.wait_for_frames();
    }
    rs2::auto_calibrated_device cal = device.as<rs2::auto_calibrated_device>();
    // NOTE: as discovered later in this thread, the SDK writes two health
    // values (before/after), so a float[2] buffer is safer than a single float.
    float health = 10.0;
    const auto callback = [&](const float progress) -> void { LOG(INFO) << progress; };
    LOG(INFO) << "Starting tare calibration ...";
    // ground_truth is in millimetres; the last argument is the timeout in ms
    rs2::calibration_table tare_res = cal.run_tare_calibration(ground_truth, "", &health, callback, 20000);
    LOG(INFO) << "Finished tare calibration!";
    LOG(INFO) << "Health:" << health;
    if (health < 0.25) {
        cal.set_calibration_table(tare_res);
        cal.write_calibration();
        LOG(INFO) << "write tare calibration result to device";
        calibrateState = 0;
    }
    else {
        LOG(WARNING) << "tare calibrate health = " << health << ", is not GOOD, check envs and try again";
        calibrateState = 6;
    }
    pipe.stop();
    return 0;
}

Calibration scenario: [screenshot]

The results I obtained: [screenshot]

I achieved good health results using the Depth Quality Tool: [screenshot]

I analyzed the reason for this difference. When I call the run_tare_calibration API on a large plane, the health value is smaller, and the Depth Quality Tool also shows less noise (fewer holes). In that case both the API and the Depth Quality Tool can complete tare calibration successfully.

But when only the ROI area meets the flat-surface requirement, noise (holes) occasionally appears. The Depth Quality Tool can still run tare calibration normally, while the API approach produces a significantly worse (larger) health value.

Is the API sensitive to data with noise (holes)? Does the Depth Quality Tool apply some filtering to the depth data before tare calibration?

[screenshot]

I am trying to modify the source code of the API so that I can manually add some filters, but I cannot find the depth data I want to modify.

run tare calibration source code

My usage scenario does not allow me to provide a larger plane, and I can only perform Tare calibration through the API. Is there a good way to do this?

MartyG-RealSense commented 2 days ago

Hi @diplomatist The RealSense Viewer and the Depth Quality Tool have a range of image-improving post processing filters enabled by default, whilst with a program script there is no post processing applied by default as the filters have to be deliberately added to a script in order to apply them.

The images below taken in the Depth Quality Tool illustrate the difference with the filters enabled by default (upper image) and the filters all disabled (lower image). The enabled filters are the ones with a blue icon beside them.

Post-processing filters enabled:

[image]

Post-processing filters disabled:

[image]
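
To apply similar filtering in a program, the SDK's post-processing classes can be chained in code. A minimal sketch (the filter choice and ordering here are illustrative, not necessarily the DQT's exact defaults):

#include <librealsense2/rs.hpp>

// Illustrative post-processing chain for a captured depth frame.
rs2::frame apply_filters(rs2::depth_frame depth)
{
    static rs2::decimation_filter    dec;   // downsample the depth image
    static rs2::spatial_filter       spat;  // edge-preserving smoothing
    static rs2::temporal_filter      temp;  // smooth across frames
    static rs2::hole_filling_filter  holes; // fill remaining holes

    rs2::frame f = depth;
    f = dec.process(f);
    f = spat.process(f);
    f = temp.process(f);
    f = holes.process(f);
    return f;
}

Note, though, that run_tare_calibration appears to collect its own frames inside the SDK, so filtering in application code affects only what your application sees, not necessarily the data the calibration routine uses.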

diplomatist commented 2 days ago

Hi @MartyG-RealSense, thank you for your prompt reply :). Actually, I turned off the post-processing filters in the Depth Quality Tool before running Tare Calibration. Does the Depth Quality Tool ignore the post-processing filter settings when it performs Tare Calibration?

I also want to know whether the Depth Quality Tool's Tare Calibration actually calls auto_calibrated::run_tare_calibration or the calibration_table run_tare_calibration interface.

Does the logic of auto_calibrated::run_tare_calibration filter out some invalid depth holes? I am guessing that is the API the Depth Quality Tool uses for Tare Calibration?

diplomatist commented 2 days ago

I want to know whether the health value returned by run_tare_calibration in the C++ code example I provided is affected by holes. Does the Depth Quality Tool's Tare Calibration filter out invalid depth holes?

MartyG-RealSense commented 2 days ago

My knowledge of the mechanics of Tare calibration is admittedly limited, so I will refer you to the Tare section of the calibration guide at the link below and also to https://github.com/IntelRealSense/librealsense/issues/10213#issuecomment-1030589898 where one of my Intel RealSense colleagues who is a calibration expert provides advice about it.

https://dev.intelrealsense.com/docs/self-calibration-for-depth-cameras#31-running-the-tare-routine

The user guide recommends that D455 users disable a D455 feature called the thermal loop configuration, and states that the Viewer and the Depth Quality Tool disable it automatically. It is not disabled in scripting though, so it should be set to disabled there.

I believe that it is likely referring to an option called RS2_OPTION_THERMAL_COMPENSATION. In C++ code you should be able to disable it by setting it to a value of '0' with code like this:

rs2::pipeline pipe;
rs2::pipeline_profile selection = pipe.start();
rs2::device selected_device = selection.get_device();
auto depth_sensor = selected_device.first<rs2::depth_sensor>();
// 0 = disabled; guard with supports() in case the model lacks this option
if (depth_sensor.supports(RS2_OPTION_THERMAL_COMPENSATION))
    depth_sensor.set_option(RS2_OPTION_THERMAL_COMPENSATION, 0.f);

Thermal Compensation is a D455 feature that allows the camera to automatically adjust depth values for accuracy depending upon temperature.

diplomatist commented 2 days ago

@MartyG-RealSense Thank you for your prompt reply:). I will try your suggestion within 7 hours and update the issue.

diplomatist commented 2 days ago
int main(int argc, char* argv[]) {

    float ground_truth = std::atof(argv[1]);
    int calibrateState = 0;
    rs2::context ctx;
    std::atomic_bool reconnected(false);
    ctx.set_devices_changed_callback(
        [&reconnected](rs2::event_information& info) {
            LOG(INFO) << "camera device change triggered!";
            for (auto&& dev_new : info.get_new_devices()) {
                std::string new_id = dev_new.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
                LOG(INFO) << "camera device connected: " << new_id;
                reconnected.store(true);
                break;
            }
        });
    rs2::device device;
    device = ctx.query_devices().front();
    device.hardware_reset();
    auto wait_until = std::chrono::system_clock::now() + std::chrono::seconds(30);
    while (std::chrono::system_clock::now() < wait_until)
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        if (reconnected.load()) {
            LOG(INFO) << "camera reset complete!";
            break;
        }
    }
    if (!reconnected) {
        LOG(ERROR) << "camera no connected";
    }
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 424, 240, RS2_FORMAT_Z16, 30);
    device = pipe.start(cfg).get_device();
    auto depth_sensor = device.first<rs2::depth_sensor>();
    depth_sensor.set_option(RS2_OPTION_THERMAL_COMPENSATION, 0.f);
    for (int i = 0; i < 60; i++) {
        pipe.wait_for_frames();
    }
    rs2::auto_calibrated_device cal = device.as<rs2::auto_calibrated_device>();
    float health[2] = { -1.0f, -1.0f };
    const auto callback = [&](const float progress) -> void {LOG(INFO) << progress; };
    LOG(INFO) << "Starting tare calibration ...";
    rs2::calibration_table tare_res = cal.run_tare_calibration(ground_truth, "", health, callback, 5000);
    LOG(INFO) << "Finished tare calibration!";
    LOG(INFO) << "Health[0]= " << health[0] << "  Health[1] = " << health[1];
    if (health[1] < 0.25) {
        cal.set_calibration_table(tare_res);
        cal.write_calibration();
        LOG(INFO) << "write tare calibration result to device";
        calibrateState = 0;
    }
    else {
        LOG(WARNING) << "tare calibrate health = " << health[1] << ", is not GOOD, check envs and try again";
        calibrateState = 6;
    }
    depth_sensor.set_option(RS2_OPTION_THERMAL_COMPENSATION, 1.f);
    pipe.stop();
    return 0;
}

Morning! @MartyG-RealSense

I tried your suggestion and added the D455 thermal compensation handling to my example as in the code you provided, but unfortunately it didn't work. I printed health[0] and health[1]: health[0] is very large, about 0.6, while health[1] is very small, about -0.002. As I understand it, health[0] is the error before calibration and health[1] is the error after calibration? After writing the calibrated parameters to the D455 device, I checked the error using the Depth Quality Tool and found that the calibration error was very large: the measured distance to the ROI went from 1385 mm to 985 mm.

Afterwards, I recalibrated using the Depth Quality Tool on Windows and it returned to normal.

It seems there is still something inconsistent with the Depth Quality Tool's Tare Calibration.
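
For reference, if a bad table has already been written, recent SDK versions expose a rollback. A minimal sketch, assuming the connected device and SDK version support reset_to_factory_calibration():

// Sketch: restore the factory calibration table after writing a bad tare result.
// Assumes the device supports this call (backed by rs2_reset_to_factory_calibration).
rs2::auto_calibrated_device cal = device.as<rs2::auto_calibrated_device>();
cal.reset_to_factory_calibration();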

I have read the source code of the Depth Quality Tool's Tare Calibration in on-chip-calib.cpp.

I noticed that the on-chip-calib device is a subdevice, and the way it turns off thermal compensation is slightly different from the device obtained from the pipeline_profile. [image]

I checked on-chip-calib's handling of the device and only saw set_laser_emitter_state(1.0f) and set_thermal_loop_state(0.f).
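
Translated to the public option API, that preparation would look roughly like the sketch below; it assumes RS2_OPTION_EMITTER_ENABLED and RS2_OPTION_THERMAL_COMPENSATION correspond to the set_laser_emitter_state() / set_thermal_loop_state() calls seen in on-chip-calib.cpp:

// Sketch: mirror what on-chip-calib appears to do around calibration.
auto depth_sensor = device.first<rs2::depth_sensor>();
float prev_thermal = 0.f;
if (depth_sensor.supports(RS2_OPTION_EMITTER_ENABLED))
    depth_sensor.set_option(RS2_OPTION_EMITTER_ENABLED, 1.f);          // laser on
if (depth_sensor.supports(RS2_OPTION_THERMAL_COMPENSATION)) {
    prev_thermal = depth_sensor.get_option(RS2_OPTION_THERMAL_COMPENSATION);
    depth_sensor.set_option(RS2_OPTION_THERMAL_COMPENSATION, 0.f);     // thermal loop off
}
// ... run_tare_calibration here ...
if (depth_sensor.supports(RS2_OPTION_THERMAL_COMPENSATION))
    depth_sensor.set_option(RS2_OPTION_THERMAL_COMPENSATION, prev_thermal); // restore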

I am trying to mirror on-chip-calib's processing in my own code: [image]

My example program now obtains a poor before-health and an excellent after-health, but in the Depth Quality Tool the measured distance of the ROI (ground truth = 1385 mm) is 1083 mm. [screenshots]

The result shown in the Depth Quality Tool after this Tare Calibration is as follows: [screenshot]

diplomatist commented 2 days ago

I have confirmed that the Depth Quality Tool's Tare Calibration calls the calibration_table run_tare_calibration interface that I mentioned.

diplomatist commented 1 day ago

@MartyG-RealSense We have 193 D455s in the deployment environment; they are difficult to disassemble for calibration and can only be calibrated remotely over SSH. I have put a lot of effort into reading and experimenting with the API. I found that the default ROI of the calibration_table run_tare_calibration API seems to be very large, and my scenario cannot provide a flat surface covering such a large ROI. By contrast, Tare Calibration in the Depth Quality Tool can calibrate on an area of around 20% ROI (which I have verified experimentally). However, even reading the Depth Quality Tool code alongside the official documentation and examples, I cannot find the logic that achieves calibration on a 20% ROI. May I contact Intel's Tare calibration experts to help us resolve this issue?

MartyG-RealSense commented 1 day ago

I have highlighted your question to my Intel RealSense colleagues to seek their advice about your Tare question. Thanks very much for your patience!

diplomatist commented 1 day ago

> I have highlighted your question to my Intel RealSense colleagues to seek their advice about your Tare question. Thanks very much for your patience!

Thank you for your prompt feedback:). I will wait for your reply online.

MartyG-RealSense commented 17 hours ago

My Tare expert colleague provided the following feedback.

"I looked over the Github ticket and I believe the core issue is a discrepancy with the tare result using DQT / Viewer and that from the user’s C++ code based on SDK. And it appears that it’s due to a difference in the ROIs used – 20% with DQT and larger (perhaps up to 80%) with their code".

diplomatist commented 17 hours ago

> My Tare expert colleague provided the following feedback.
>
> "I looked over the Github ticket and I believe the core issue is a discrepancy with the tare result using DQT / Viewer and that from the user’s C++ code based on SDK. And it appears that it’s due to a difference in the ROIs used – 20% with DQT and larger (perhaps up to 80%) with their code".

@MartyG-RealSense Yes, the C++ code Tare calibration interface you mentioned is indeed based on an 80% ROI. When I called the C++ API with its defaults, I printed the region of interest value, and it was indeed 80% of the input resolution rather than 20%. I also see a default scaling factor resize_factor = 5 in the d400_auto_calib interface, which likewise corresponds to 20%. Do you have any good suggestions for quickly achieving a 20% ROI Tare calibration?

Which method would be more convenient: modifying some parameters, or adding some configuration on top of run_tare_calibration? As long as it is consistent with the DQT implementation, it will greatly reduce my (huge) workload.
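
For reference, run_tare_calibration does accept a JSON string as its second argument; the self-calibration white paper documents tare parameters such as "average step count", "step count", "accuracy", "scan parameter" and "data sampling", but no documented key sets the ROI size. A sketch of passing such parameters, under that assumption:

// Sketch: passing tare parameters as JSON. Keys are from the self-calibration
// white paper; an ROI-size key is not documented, so the ROI presumably still
// follows the SDK default unless the source is changed.
std::string json =
    "{"
    "  \"average step count\": 20,"
    "  \"step count\": 20,"
    "  \"accuracy\": 2,"        // 0 = very high ... 3 = low
    "  \"scan parameter\": 0,"
    "  \"data sampling\": 0"
    "}";
float health[2] = { -1.f, -1.f };
rs2::calibration_table tare_res =
    cal.run_tare_calibration(ground_truth, json, health, callback, 5000);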

Looking forward to your answer.

MartyG-RealSense commented 15 hours ago

My understanding is that in the DQT the scaling is taken care of automatically by clicking the Get button to recalculate the ROI distance once you have put the camera at the desired distance from the observed target. As far as I am aware though there is not an equivalent command to the Get button for initiating the calculation via scripting.

I see though that in the SDK source code file d400-auto-calibration.cpp the resize factor is defined as 5. So if 5 represents 20 percent (100 / 5), perhaps a different ROI scale could be achieved by changing that value and then building the SDK from the modified source code.

https://github.com/IntelRealSense/librealsense/blob/e1688cc318457f7dd57abcdbedd3398062db3009/src/ds/d400/d400-auto-calibration.cpp#L200

diplomatist commented 15 hours ago

> My understanding is that in the DQT the scaling is taken care of automatically by clicking the Get button to recalculate the ROI distance once you have put the camera at the desired distance from the observed target. As far as I am aware though there is not an equivalent command to the Get button for initiating the calculation via scripting.
>
> I see though that in the SDK source code file d400-auto-calibration.cpp the resize factor is defined as 5. So if 5 represents 20 percent (100 / 5), perhaps a different ROI scale could be achieved by changing that value and then building the SDK from the modified source code.
>
> https://github.com/IntelRealSense/librealsense/blob/e1688cc318457f7dd57abcdbedd3398062db3009/src/ds/d400/d400-auto-calibration.cpp#L200

Actually, what I need is calibration on a 20% ROI, implemented through the C++ API, but I haven't found any examples of using d400_auto_calib. Can you provide some examples of its use? I will run an experiment based on one to determine whether resize_factor affects the calibration area.

Do the Tare calibration experts have a method for setting a specified ROI size through the C++ API? Could you assist by asking the relevant experts?

As far as I know, since DQT also calls the calibrate_table run_tare_calibration interface and implements the calibration method of specifying the ROI size, I believe that experts use some way to pass the calibration ROI size in the DQT interface as a parameter to the interface for calibration. Can you help me inquire with experts about the details in this area, which will be of great help to my work.