IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

D455 camera calibration and captures feature points #12689

Open junmoxiao11 opened 6 months ago

junmoxiao11 commented 6 months ago

Required Info
Camera Model D455
Firmware Version (Open RealSense Viewer --> Click info)
Operating System & Version Linux (Ubuntu 20.04)
Kernel Version (Linux Only) (e.g. 4.14.13)
Platform PC
SDK Version { legacy / 2.<?>.<?> }
Language python
Segment Robot

Issue Description

When I was shooting with the D455 camera, I noticed a lot of black depth noise on the image. Is there any way to calibrate the camera for noise reduction? I found the code to capture the feature points, but there is a lot of depth noise on the image, which severely affects the use of the algorithm. So I need to calibrate the camera and see if the results can be a little more accurate. Is there any way I can learn how to use the D455 camera more?

junmoxiao11 commented 6 months ago

There is a lot of black depth noise on the image.

MartyG-RealSense commented 6 months ago

Hi @junmoxiao11 Does your depth image improve if you reset the camera to its factory-new default calibration in the RealSense Viewer using the instructions at https://github.com/IntelRealSense/librealsense/issues/10182#issuecomment-1019854487 please?

junmoxiao11 commented 6 months ago

I did everything you said, but there are still some black noise dots on the image. These black dots will affect my ability to capture the point at which the depth value of the object is changing, which would make a big difference in my observations. Do you have any other methods to reduce noise? Or does the D455 camera inevitably produce depth noise in the image when shooting?

MartyG-RealSense commented 6 months ago

If you expand open the Post Processing section of the Viewer's side-panel and enable the Hole Filling filter (which is turned off by default) then it should fill in the holes.

junmoxiao11 commented 6 months ago

I am not sure I know what you mean. And what do you mean by "fill in the holes"? Does that mean filling in the black depth noise? Do you have any other way to calibrate? I want my images to be shot without any black depth noise.

MartyG-RealSense commented 6 months ago
  1. Expand open the Stereo Module section of the Viewer side-panel by clicking on the arrow beside it.

  2. Look down the list of options until you find one called Post Processing. Click on the arrow beside it to show all the types of post-processing filter that are available.

  3. Find the filter called Hole Filling and click on the red icon beside it (which means Off) to turn it blue (On). The small black holes should then be automatically filled in.
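For use outside the Viewer, the same idea can be illustrated in code. The sketch below is not the SDK's actual hole-filling algorithm (pyrealsense2 exposes that as `rs.hole_filling_filter()`); it is a simplified numpy stand-in that fills each zero-depth pixel with the nearest valid value to its left, roughly in the spirit of the filter's "farthest from around" mode:

```python
import numpy as np

def fill_holes_left(depth):
    """Fill zero (no-depth) pixels with the nearest valid value to their
    left in the same row. A crude stand-in for the SDK's hole-filling
    filter; leading zeros in a row have no left neighbor and stay zero."""
    filled = depth.copy()
    for row in filled:          # each row is a view, so edits stick
        last_valid = 0
        for i, v in enumerate(row):
            if v == 0:
                row[i] = last_valid
            else:
                last_valid = v
    return filled

# Tiny example depth frame (z16 values in mm, zeros are holes).
depth = np.array([[500,   0,   0, 510],
                  [  0, 498,   0, 505]], dtype=np.uint16)
print(fill_holes_left(depth))
```

In a real pipeline you would instead pass each depth frame through `rs.hole_filling_filter().process(depth_frame)`.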



You could try resetting the camera in the Viewer to its factory-new default calibration to see whether your depth image improves. Instructions for doing so can be found at https://github.com/IntelRealSense/librealsense/issues/10182#issuecomment-1019854487

junmoxiao11 commented 6 months ago

After I tried it the way you said, all the black noise on the image was gone! Thank you! But when I turned the realsense-viewer back on, the settings were gone again and I had to readjust them. Is there a way to save my settings for the camera, so that every time I open the realsense-viewer I don't have to keep adjusting it? And I have one more question. Is the camera's accuracy affected by light sources and thermal radiation? Because I found that the depth value of the same point would always vary within an interval rather than being a fixed number.

MartyG-RealSense commented 6 months ago

Some settings, including post-processing filters, are not preserved when the Viewer is closed and reset to their default status when the Viewer is re-opened. There is not a way to permanently set these options in the Viewer unfortunately.

RealSense 400 Series cameras can perform excellently in sunlight, except when directly facing the sun. When the camera faces the sun its infrared sensors can become saturated with light, negatively affecting the depth and infrared images. If auto-exposure is enabled then the camera should auto-correct when the camera is no longer directly facing the sun.

Using a RealSense camera equipped with a light-blocking filter such as the D455f can result in an improved depth image.

https://www.intelrealsense.com/depth-camera-d455f/

The filter, the CLAREX NIR-75N, can also be purchased separately and attached externally on the outside of a RealSense camera that is not equipped with the filter such as the D455.

junmoxiao11 commented 6 months ago

So in your third step : Find the filter called Hole Filling and click on the red icon beside it (which means Off) to turn it blue (On). The small black holes should then be automatically filled in. The setting in this step cannot be saved, right?

MartyG-RealSense commented 6 months ago

No, the setting cannot be saved. An alternative method that you could try for reducing holes is to set the Laser Power option under 'Stereo Module > Controls' to its maximum value of '360'. The Laser Power value remains at its previous setting when the Viewer is opened and closed, so once set to 360 then it should still be 360 when the Viewer is next launched.

junmoxiao11 commented 5 months ago

I set the Laser Power option to 360 and there are still a lot of black spots on the image. Can I write a piece of Python code so that the Hole Filling filter is automatically turned on when the realsense-viewer starts? And I found that when I took a picture with the realsense-viewer, the depth value of the same point kept changing, within about a centimeter. Is this normal?

junmoxiao11 commented 5 months ago

And I found that if I pointed the camera at a light source, like an electric lamp, the part of the lamp that was captured would appear black. Is the camera unable to align with the light source when shooting?

MartyG-RealSense commented 5 months ago

If the light source that the camera is pointed at is very strong then the camera may be unable to read depth information from the area where the light is concentrated, causing that area to appear as black (no depth) on the depth image.

RealSense camera models equipped with a light-blocking filter, such as D455f, will be better able to handle light. The filter can also be purchased separately as the CLAREX NIR-75N product and attached over the camera lenses on the outside of a non-filtered camera model such as D455.

junmoxiao11 commented 5 months ago

Thank you for your answer. Now I want to shoot a video with the D455 camera and save it, then use code to capture the objects in this video that have changed position. Does this idea of mine work?

MartyG-RealSense commented 5 months ago

If you want to analyze the data then you would likely have to record a bag file, which is like a video recording of camera data. When a script reads a bag file then it can use the data stored in the bag as though it is accessing a live camera.

junmoxiao11 commented 5 months ago

As you said, I will try to use code to capture the displacement change of the moving object in the video. And I saw a video on YouTube: https://youtu.be/b-1jF9m2NSQ Do you know how this is done in the video? It might help me with my research.

MartyG-RealSense commented 5 months ago

From the video description and the date of the video, it sounds as though it is using the Unreal Engine 5 VR Template and the RealSense Unreal Engine 5 plugin.

VR template https://docs.unrealengine.com/5.0/en-US/vr-template-in-unreal-engine/

RealSense UE5 plugin https://github.com/IntelRealSense/librealsense/issues/12262

junmoxiao11 commented 5 months ago

Is this picture taken by my D455 camera normal? And I have one more question. Why, after I adjust the realsense-viewer's controls and post-processing, is there still a lot of black noise when I record a video? Is this because post-processing can't be saved to the video either?

MartyG-RealSense commented 5 months ago

Yes, that is a normal and good quality depth image.

The black edge around your body is normal when scanning the human body with RealSense cameras.

In regard to the black area that is apparently behind your body which looks like a chair, if the chair has black colored sections then these will not be rendered on the depth image. This is because it is a general physics principle (not specific to RealSense) that dark grey or black absorbs light and so makes it more difficult for depth cameras to read depth information from such surfaces. The darker the color shade, the more light that is absorbed and so the less depth detail that the camera can obtain.


You could try filling in the black areas of the depth image by applying a post-processing filter with hole-filling properties.

You are correct, post-processing is not saved to a bag file. Instead, the bag file and its raw camera data should be loaded in and then post-processing filters applied to the bag file's data in real-time.

junmoxiao11 commented 5 months ago

So how do I post-process the raw camera data in this bag file, to remove the effect of the black noise?

MartyG-RealSense commented 5 months ago

A script that uses bag file data is almost the same as a script that uses a live camera except that it contains a rs.config.enable_device_from_file instruction to tell the script to use the bag as its data source. This principle is demonstrated in the SDK's Python example read_bag_example.py.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/read_bag_example.py#L42C5-L42C38

So you would use a Python post-processing script and add the enable_device_from_file line to it to post-process bag data.
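As a sketch of how that might look (the bag path, the choice of filters, and their order below are illustrative assumptions, not a prescribed setup; `playback_with_filters` needs pyrealsense2 and a real bag file, so it is only defined, not called):

```python
def apply_filters(frame, filters):
    """Run a frame through each post-processing filter in order."""
    for f in filters:
        frame = f.process(frame)
    return frame

def playback_with_filters(bag_path):
    """Open a recorded bag instead of a live camera and post-process its
    depth frames. bag_path is an example argument, e.g. 'recording.bag'."""
    import pyrealsense2 as rs
    pipeline = rs.pipeline()
    config = rs.config()
    # Use the recorded bag as the data source instead of a live camera.
    rs.config.enable_device_from_file(config, bag_path)
    config.enable_stream(rs.stream.depth, rs.format.z16, 30)
    pipeline.start(config)
    # One common order: spatial smoothing, then temporal, then hole filling.
    filters = [rs.spatial_filter(), rs.temporal_filter(),
               rs.hole_filling_filter()]
    try:
        while True:
            frames = pipeline.wait_for_frames()
            depth = apply_filters(frames.get_depth_frame(), filters)
            # ... analyze or display the filtered depth frame here ...
    finally:
        pipeline.stop()
```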

junmoxiao11 commented 5 months ago

# First import the library
import pyrealsense2 as rs
# Import Numpy for easy array manipulation
import numpy as np
# Import OpenCV for easy image rendering
import cv2
# Import argparse for command-line options
import argparse
# Import os.path for file path manipulation
import os.path

# Create object for parsing command-line options
parser = argparse.ArgumentParser(description="Read recorded bag file and display depth stream in jet colormap. Remember to change the stream fps and format to match the recorded.")
# Add argument which takes path to a bag file as an input
parser.add_argument("-i", "--input", type=str, default="20240301_101433.bag", help="Path to the bag file, default is '20240301_101433.bag'")
# Parse the command line arguments to an object
args = parser.parse_args()

try:
    # Create pipeline
    pipeline = rs.pipeline()

    # Create a config object
    config = rs.config()

    # Tell config that we will use a recorded device from file to be used by the pipeline through playback.
    rs.config.enable_device_from_file(config, args.input)

    # Configure the pipeline to stream the depth stream
    # Change these parameters according to the recorded bag file resolution
    config.enable_stream(rs.stream.depth, rs.format.z16, 30)

    # Start streaming from file
    pipeline.start(config)

    # Create opencv window to render image in
    cv2.namedWindow("Depth Stream", cv2.WINDOW_AUTOSIZE)

    # Create colorizer object
    colorizer = rs.colorizer()

    # Streaming loop
    while True:
        # Get frameset of depth
        frames = pipeline.wait_for_frames()

        # Get depth frame
        depth_frame = frames.get_depth_frame()

        # Colorize depth frame to jet colormap
        depth_color_frame = colorizer.colorize(depth_frame)

        # Convert depth_frame to numpy array to render image in opencv
        depth_color_image = np.asanyarray(depth_color_frame.get_data())

        # Render image in opencv window
        cv2.imshow("Depth Stream", depth_color_image)
        key = cv2.waitKey(1)
        # if pressed escape exit program
        if key == 27:
            cv2.destroyAllWindows()
            break
finally:
    pass

This is the code I changed based on the example you gave me, but I got the following error when I ran it in the Ubuntu 20.04 terminal. Do you know why that is? "Traceback (most recent call last): File "bagduqu.py", line 35, in pipeline.start(config) RuntimeError: Failed to resolve request. Request to enable_device_from_file("20240301_101433.bag") was invalid, Reason: Failed to create ros reader: Error opening file: 20240301_101433.bag"

MartyG-RealSense commented 5 months ago

Is the bag file placed in the same folder as your Python script?

Is the bag file able to be played back if you drag and drop it into the center panel of the RealSense Viewer? If it does not play back then it could indicate that the bag file is incomplete or corrupted.

junmoxiao11 commented 5 months ago

The bag file plays normally in the realsense-viewer. The bag file does not appear to be in the same folder as the Python script. I'll try again with your advice.

junmoxiao11 commented 5 months ago

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/read_bag_example.py#L42C5-L42C38 I ran the code here, but it only seemed to open my bag file and play the video inside. Is there any way to capture the point in this video where the displacement changes?

MartyG-RealSense commented 5 months ago

Can you confirm what you mean when you say 'displacement' please?

junmoxiao11 commented 5 months ago

I mean I want to find the object in the video that has changed position. In short, I want to find the moving object and record its depth information.

MartyG-RealSense commented 5 months ago

There is not a code example available for doing this kind of position tracking with RealSense in Python, unfortunately. I researched the subject again carefully to be certain but could not find anything helpful.

If you know the color of the object in advance then it is possible to track an object by a particular color. You can find non-RealSense example projects that do this by googling for the term python track object position by color. The code in these projects may be adaptable for a pyrealsense2 script.
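The core of such color tracking can be sketched without OpenCV: threshold the image for a color range, then take the centroid of the matching pixels. The color bounds below are made-up example values; real projects usually convert to HSV with cv2.cvtColor and mask with cv2.inRange rather than comparing raw RGB:

```python
import numpy as np

def track_color(image, lower, upper):
    """Return the (x, y) centroid of pixels whose RGB values fall inside
    [lower, upper] per channel, or None if no pixel matches."""
    mask = np.all((image >= lower) & (image <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# A 4x4 black image with a reddish 2x2 block in the top-left corner.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[0:2, 0:2] = (200, 30, 30)
print(track_color(img, lower=(150, 0, 0), upper=(255, 80, 80)))  # (0.5, 0.5)
```

The centroid gives a 2D pixel position; reading the depth frame at that pixel then gives the tracked object's distance.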

junmoxiao11 commented 5 months ago

Do you have examples of such specific color tracking objects? I didn't find what I was looking for on Google.

MartyG-RealSense commented 5 months ago

My apologies for the delay in replying further. I have only seen one RealSense-specific color tracking tutorial, but it was removed from the internet. However, there is a copy of it archived on the Wayback Machine internet archiving website at the link below.

https://web.archive.org/web/20201120175839/https://by-the-w3i.github.io/2019/10/06/ColorBlockTracking/

junmoxiao11 commented 5 months ago

I have an idea now. Do you have a way to split the video in the bag file into frame by frame images? I wanted to capture the point of position change by reading the depth information inside each frame and the change in coordinates between them.

MartyG-RealSense commented 5 months ago

You can save the bag file as one bag file per frame using the save_single_frameset instruction, as described at https://github.com/IntelRealSense/librealsense/issues/12761

If you have already recorded the bag then you can extract each frame as an individual image file using the RealSense SDK's rs-convert bag extraction tool or an alternative tool created by a RealSense user called rs_bag2image

rs-convert https://github.com/IntelRealSense/librealsense/tree/master/tools/convert

bag2image https://github.com/UnaNancyOwen/rs_bag2image

junmoxiao11 commented 5 months ago

Is there any specific example code that can help me convert bag files into many single-frame files? I want to write it in Python.

MartyG-RealSense commented 5 months ago

There is not a direct equivalent to these extraction tools for Python, unfortunately. You might be able to create one though by modifying a script that saves depth and color frames to PNG, like the script at https://github.com/IntelRealSense/librealsense/issues/3658

If you needed to convert a bag file then you could possibly modify the script to use a bag file as the data source instead of a live camera with the enable_device_from_file instruction, like in https://github.com/IntelRealSense/librealsense/issues/9585 and the script should then read the bag file frame by frame and save the frames to PNG images.

junmoxiao11 commented 5 months ago

PNG files do not seem to contain depth information. I want to be able to read the depth information of these segmented single-frame files.

MartyG-RealSense commented 5 months ago

Most of the depth information is lost when saving to a PNG file. If you instead export to a .raw file then the depth information is preserved. This subject is discussed in regard to Python at https://github.com/IntelRealSense/librealsense/issues/10553

You can obtain a .raw file from the RealSense Viewer if you use the Snapshot button on the row of icons on top of the depth stream panel.
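A .raw export is essentially the frame's 16-bit z16 values written to disk with no header, so a lossless round trip can be sketched with numpy alone (the filename and the tiny 2x2 frame here are placeholder examples):

```python
import numpy as np

def save_depth_raw(depth, path):
    """Write a 16-bit depth frame to a .raw file as little-endian z16 values."""
    depth.astype('<u2').tofile(path)

def load_depth_raw(path, width, height):
    """Read the frame back; width/height must be known, since .raw has no header."""
    return np.fromfile(path, dtype='<u2').reshape(height, width)

depth = np.array([[1200,    0],
                  [ 845, 1310]], dtype=np.uint16)  # depth in mm, 0 = hole
save_depth_raw(depth, "frame.raw")
restored = load_depth_raw("frame.raw", width=2, height=2)
print(np.array_equal(depth, restored))  # True -- no depth precision lost
```

By contrast, a colorized PNG only keeps an 8-bit visualization, which is why the original values cannot be recovered from it.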

junmoxiao11 commented 5 months ago

Or do you know of any examples of corner tracking? Now I wonder if I can use color tracking or corner tracking to capture the position of objects. And the captured point position change information is output as a file.

MartyG-RealSense commented 5 months ago

There may be research papers that discuss in text the goals that you want to achieve, but Python source code may be more useful to you. A good starting point may therefore be to look at the Python / OpenCV project at the link below, which finds corners using Harris tracking.

https://github.com/sm823zw/Harris-Corner-tracking-using-LK-method
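For reference, Harris corner detection reduces to computing a per-pixel response R = det(M) − k·trace(M)² from the image's gradient structure tensor M. A bare-bones numpy version (simple box window, no Gaussian smoothing, unlike cv2.cornerHarris) applied to a synthetic bright square, whose corners should give the strongest response:

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris response R = det(M) - k*trace(M)^2 per pixel, where M is the
    structure tensor of finite-difference gradients summed over a 3x3 window."""
    iy, ix = np.gradient(img.astype(float))
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):  # 3x3 box sum via zero padding
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[r:r + h, c:c + w] for r in range(3) for c in range(3))

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    return (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2

# Bright square on a dark background; its four corners are the features.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
r = harris_response(img)
peak = np.unravel_index(np.argmax(r), r.shape)
print(peak)  # lands near one of the square's corners
```

Edges score negatively under this measure and flat regions score near zero, which is exactly why Harris picks out corners rather than outlines.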

junmoxiao11 commented 5 months ago

Are there instances of color tracking or corner point tracking?

MartyG-RealSense commented 5 months ago

There are a couple of Python / OpenCV tutorials for tracking by color at the links below.

https://www.instructables.com/Object-tracking-by-color-with-Python-and-openCV/

https://davidhampgonsalves.com/opencv/python-color-tracking/

junmoxiao11 commented 5 months ago

Can video captured from the RealSense D455 camera in the realsense-viewer be used for color tracking or corner point tracking? And for a point found by color tracking, can its depth information and coordinate information be read?

MartyG-RealSense commented 5 months ago

The RealSense Viewer tool does not have color or corner tracking. But if you used the Viewer to capture a bag file (which is like a video recording of camera data) and used that bag file in a Python script as the data source instead of a live camera then it should be possible to use the recorded color and depth streams for tracking.

If you have an XY color point then you could also obtain a depth value for it by converting the 2D color pixel to a 3D depth pixel with the RS2_PROJECT_POINT_TO_PIXEL instruction. A Python example of this instruction's use can be found at https://github.com/IntelRealSense/librealsense/issues/8239#issuecomment-767480568
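The underlying math for turning a pixel plus its depth into a 3D point is the pinhole camera model, which is what the SDK computes (via rs2_deproject_pixel_to_point) in the distortion-free case. A sketch with made-up intrinsic values; in a real script the intrinsics come from the stream profile's get_intrinsics():

```python
def deproject_pixel_to_point(pixel, depth, intrinsics):
    """Pinhole back-projection: map a (u, v) pixel plus its depth (metres)
    to a 3D point in the camera frame. intrinsics is a dict with focal
    lengths fx, fy and principal point ppx, ppy (example values below)."""
    u, v = pixel
    x = (u - intrinsics["ppx"]) / intrinsics["fx"] * depth
    y = (v - intrinsics["ppy"]) / intrinsics["fy"] * depth
    return (x, y, depth)

intr = {"fx": 600.0, "fy": 600.0, "ppx": 320.0, "ppy": 240.0}  # made-up intrinsics
# A pixel 60 columns right of the principal point, 1.2 m away.
print(deproject_pixel_to_point((380, 240), depth=1.2, intrinsics=intr))
```

Projecting a 3D point back to a pixel is the inverse operation; both directions only need the intrinsics and the depth value.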

junmoxiao11 commented 5 months ago

Thank you so much for your assistance. I am new to computer programming and machine vision, and I am a complete novice. So I may ask a lot of silly questions.

junmoxiao11 commented 5 months ago

Then I'll probably start with the recorded bag file instead of capturing objects in real time with the realsense-viewer. But I might still need a lot of help with color tracking or feature tracking in the video, because I don't know anything about that code. I am slowly starting to learn. Do you have any examples or tutorials that can help me get started? Thanks a lot.

MartyG-RealSense commented 5 months ago

Don't worry about asking questions. Everybody was new once!

If your project is able to make use of ROS then there is another RealSense color tracking project at the link below, in addition to the one I mentioned earlier at https://github.com/IntelRealSense/librealsense/issues/12689#issuecomment-1994137171

https://github.com/ctu-mrs/object_detect

MartyG-RealSense commented 4 months ago

Hi @junmoxiao11 Do you require further assistance with this case, please? Thanks!

junmoxiao11 commented 4 months ago

Yeah, I've been learning the corner tracking algorithm. And I'm trying to figure out a way to use corner tracking in the video from realsense camera. Do you have any examples of this?

MartyG-RealSense commented 4 months ago

https://github.com/IntelRealSense/librealsense/issues/7364#issuecomment-727635090 discusses a method for extracting corners.

A research paper at the link below refers to corner tracking with a D435i camera.

https://www.mdpi.com/2504-446X/7/1/34?type=check_update&version=3

MartyG-RealSense commented 4 months ago

Hi @junmoxiao11 Were the links in the comment above helpful to you, please? Thanks!

junmoxiao11 commented 4 months ago

The links you gave me didn't help me much. I need some examples of using realsense cameras to capture the motion of feature points. It's best to use python. For example, I recorded a video with the D455 camera and saved it in the bag file. How do I capture a moving object in this video and get information about the position of a point on the object? Including its depth information and coordinate information , which I need to turn into data and save to a file. That's what I want to do right now.