IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Highly distorted point cloud surface #13419

Open addy1997 opened 3 days ago

addy1997 commented 3 days ago

Required Info
Camera Model: D435
Firmware Version: 5.16.0.1
Operating System & Version: Linux (Ubuntu 20.04 LTS)
Kernel Version (Linux Only): 5.9.1-rt20
Platform: PC
SDK Version: 2.55.1-0
Language: C++
Segment: VR/AR

Issue Description

The point cloud surface generated by the camera is highly distorted, and the edges are not preserved either. How can I generate a high-quality point cloud surface for remote rendering to a VR device (Meta Quest 3)?

[Image: screenshot of the distorted point cloud]

Here's my modified code for obtaining the point cloud, with post-processing steps applied. The hole-filling filter doesn't help either. I also tried filters from other libraries such as PCL, but they worsened the results.

#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include "example.hpp"          // Include short list of convenience functions for rendering

#include <algorithm>         // std::min, std::max
#include <iostream>          // std::cerr
#include <vector>            // std::vector
#include <nlohmann/json.hpp> // to store point cloud data in JSON format

using json = nlohmann::json;

// Helper functions
void register_glfw_callbacks(window &app, glfw_state &app_state);

int main(int argc, char *argv[])
try
{
    // Create a simple OpenGL window for rendering:
    window app(1280, 720, "RealSense Pointcloud Example");
    // Construct an object to manage view state
    glfw_state app_state;
    // register callbacks to allow manipulation of the pointcloud
    register_glfw_callbacks(app, app_state);

    // Declare pointcloud object, for calculating pointclouds and texture mappings
    rs2::pointcloud pc;
    // We want the points object to be persistent so we can display the last cloud when a frame drops
    rs2::points points;
    // Declare RealSense pipeline, encapsulating the actual device and sensors
    rs2::pipeline pipe;
    // Start streaming with default recommended configuration
    pipe.start();

    // Declare filters
    rs2::decimation_filter dec_filter;     // Decimation - reduces depth frame density
    rs2::threshold_filter thr_filter;      // Threshold  - removes values outside recommended range
    rs2::spatial_filter spat_filter;       // Spatial    - edge-preserving spatial smoothing
    rs2::temporal_filter temp_filter;      // Temporal   - reduces temporal noise
    rs2::hole_filling_filter hfill_filter; // Hole filling - fills small holes in the depth data

    // Declare disparity transforms from depth to disparity and vice versa
    rs2::disparity_transform depth_to_disparity(true);
    rs2::disparity_transform disparity_to_depth(false);

    while (app) // Application still alive?
    {
        // Wait for the next set of frames from the camera
        auto frames = pipe.wait_for_frames();

        auto color = frames.get_color_frame();

        // For cameras that don't have RGB sensor, we'll map the pointcloud to infrared instead of color
        if (!color)
            color = frames.get_infrared_frame();

        // Tell pointcloud object to map to this color frame
        pc.map_to(color);

        auto depth = frames.get_depth_frame();

        // Apply the post-processing filters to the depth frame. Each filter
        // returns a new processed frame, so the result must be reassigned.
        rs2::frame filtered = dec_filter.process(depth);
        filtered = thr_filter.process(filtered);
        filtered = spat_filter.process(filtered);
        filtered = temp_filter.process(filtered);
        filtered = hfill_filter.process(filtered);
        filtered = disparity_to_depth.process(filtered);
        filtered = depth_to_disparity.process(filtered);

        // Generate the pointcloud and texture mappings from the filtered frame
        points = pc.calculate(filtered);

        // Get the point cloud vertices; each vertex holds x, y, z in meters
        const rs2::vertex *vertices = points.get_vertices();
        size_t point_count = points.size();

        // Prepare vector to store point cloud data (x, y, z)
        std::vector<std::vector<float>> point_cloud_data;

        for (size_t i = 0; i < point_count; ++i)
        {
            // Extract x, y, z from each vertex
            float x = vertices[i].x;
            float y = vertices[i].y;
            float z = vertices[i].z;

            // Store point in vector
            point_cloud_data.push_back({x, y, z});
        }

        // Upload the color frame to OpenGL
        app_state.tex.upload(color);

        // Draw the pointcloud
        draw_pointcloud(app.width(), app.height(), app_state, points);

        // Send the point cloud data to Azure (optional, commented out)
        // send_point_cloud_data(point_cloud_data);
    }

    return EXIT_SUCCESS;
}
catch (const rs2::error &e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception &e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}
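
For reference, here is a minimal sketch of the JSON export step that the nlohmann include is intended for. The output filename is hypothetical, and vertices with z == 0 (which carry no depth data) are skipped:

#include <fstream> // std::ofstream

// Sketch of serializing the collected points with nlohmann::json
// (uses the `json` alias declared above). "pointcloud.json" is a
// hypothetical output path.
void save_point_cloud_json(const std::vector<std::vector<float>> &point_cloud_data)
{
    json j = json::array();
    for (const auto &p : point_cloud_data)
    {
        if (p[2] > 0.0f) // skip invalid vertices (z == 0 means no depth data)
            j.push_back({p[0], p[1], p[2]});
    }
    std::ofstream out("pointcloud.json");
    out << j.dump();
}
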
addy1997 commented 2 days ago

Hey @MartyG-RealSense,

Could you suggest something?

MartyG-RealSense commented 2 days ago

Hi @addy1997 There is a specific order in which Intel recommends applying the post-processing filters. I have rearranged your list below into the Intel-recommended order.

filtered = dec_filter.process(depth);
filtered = depth_to_disparity.process(filtered);
filtered = spat_filter.process(filtered);
filtered = temp_filter.process(filtered);
filtered = disparity_to_depth.process(filtered);
filtered = hfill_filter.process(filtered);
filtered = thr_filter.process(filtered);

https://dev.intelrealsense.com/docs/post-processing-filters#using-filters-in-application-code
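
If the surface is still noisy after reordering, the filters' strengths can also be tuned through their options. A rough sketch; the values below are illustrative starting points to experiment from, not official recommendations:

// Illustrative tuning of the post-processing filters. These particular
// values are assumptions, not Intel-recommended settings.
dec_filter.set_option(RS2_OPTION_FILTER_MAGNITUDE, 2);        // downsampling factor
spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 0.5f); // spatial smoothing strength
spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_DELTA, 20);   // edge-preservation threshold
temp_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 0.4f); // temporal smoothing strength
thr_filter.set_option(RS2_OPTION_MIN_DISTANCE, 0.3f);         // meters
thr_filter.set_option(RS2_OPTION_MAX_DISTANCE, 4.0f);         // meters
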

In regard to rendering to a Quest 3, you may find the detailed guide for Quest 2 at the link below to be helpful.

https://github.com/GeorgeAdamon/quest-realsense