IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Question about intel realsense D435i camera parameters #9954

Closed - wokanmanhua closed this issue 2 years ago

wokanmanhua commented 2 years ago
Required Info
Camera Model D435i
Firmware Version 05.12.15.50
Operating System & Version Win 10
SDK Version 2.49.0
Language python
Segment others

Issue Description

In the Intel RealSense D435i, what is the unit of exposure for the depth and IR streams, and what does "emitter always on" mean? For the color stream, what are the units of exposure time, gamma and sharpness? If Contrast, Gamma, Sharpness, Brightness and White Balance are all set to their middle values, does that mean the image remains unprocessed original data? Also, why can't the third option of "Emitter Enabled", "Enable LED", be selected? (screenshot of the Emitter Enabled control attached) Thank you.

MartyG-RealSense commented 2 years ago

Hi @wokanmanhua The depth and IR streams share a single exposure setting. The exposure value is expressed in microseconds (usec), giving millisecond-scale exposures with microsecond 'granularity' - for example, 33000 is 33 ms. See https://github.com/IntelRealSense/librealsense/issues/6384 for further details.
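As a sketch (assuming pyrealsense2 is installed and a camera is attached; `first_depth_sensor` and `rs.option.exposure` are standard SDK calls, but the flow here is illustrative, not an official recipe), the shared depth/IR exposure could be set like this, with the value given in microseconds:

```python
def usec_to_ms(exposure_usec):
    """Convert an exposure option value in microseconds to milliseconds."""
    return exposure_usec / 1000.0

def set_depth_ir_exposure(exposure_usec=33000):
    """Set the shared depth/IR exposure (requires a connected camera)."""
    import pyrealsense2 as rs  # imported here so the helper above stays hardware-free
    ctx = rs.context()
    depth_sensor = ctx.query_devices()[0].first_depth_sensor()
    # Manual exposure only takes effect with auto-exposure disabled
    depth_sensor.set_option(rs.option.enable_auto_exposure, 0)
    depth_sensor.set_option(rs.option.exposure, exposure_usec)
```

So passing 33000 corresponds to a 33 ms exposure.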

When the emitter is set to be always on, it is forced to be in an 'on' state all the time. The D415 camera model has an always-on projector but the projector on the D435 and D435i camera models pulses its light to coincide with exposure by default, so setting the emitter to be always on enables it to replicate the always-on behavior of the D415's projector.

Gamma and sharpness are simply dimensionless numeric values; they do not have physical units.

As far as I am aware, the 'Enable LED' option never had a description to explain its function and is never used. Option 2, 'Auto', should also be disregarded. The most important settings are Off (0) and Enable Laser (1). Setting '0' will disable the projector and effectively also set the Laser Power option to 0.
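For reference, a minimal sketch of those Emitter Enabled values (the numeric mapping 0/1/2/3 follows the description above; applying a mode requires pyrealsense2 and a connected camera):

```python
EMITTER_OFF = 0      # projector disabled; laser power effectively 0
EMITTER_LASER = 1    # default: projector pulses in sync with exposure
EMITTER_AUTO = 2     # present in the SDK but should be disregarded
EMITTER_LED = 3      # 'Enable LED' - undocumented and unused

def set_emitter_mode(depth_sensor, mode=EMITTER_LASER):
    """Apply an emitter mode if the sensor supports the option."""
    import pyrealsense2 as rs
    if depth_sensor.supports(rs.option.emitter_enabled):
        depth_sensor.set_option(rs.option.emitter_enabled, mode)
```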

When the projector is disabled, there may be a noticeable reduction in depth image detail unless the observed scene is well lit. If the scene is well lit, the camera can instead use ambient light in the scene to analyze objects and surfaces for depth detail. Disabling the projector also removes the infrared dot pattern that the projector casts onto surfaces in the scene. The dot pattern helps the camera analyze surfaces for depth information, but may not be needed in a well-lit scene.

In regard to the settings being at their middle value: typically, the default settings in the Viewer are the default values for a particular RealSense camera model that are applied by a program unless custom settings are defined and applied. It is possible to retrieve raw RGB frames directly from the camera hardware, as discussed in https://github.com/IntelRealSense/librealsense/issues/7275
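One way to check whether a setting is at its camera default is to compare its current value against what `get_option_range` reports (min/max/step/default); the hardware query itself needs a connected camera, but the helpers here are plain Python. This is a sketch, not an official utility:

```python
def option_summary(rng):
    """Summarize a pyrealsense2 option_range-like object as a dict."""
    return {"min": rng.min, "max": rng.max, "step": rng.step, "default": rng.default}

def is_at_default(value, rng):
    """True if the current value matches the sensor's reported default."""
    return value == rng.default

def query_color_defaults():
    """List every supported color-sensor option and its range (needs a camera)."""
    import pyrealsense2 as rs
    color = rs.context().query_devices()[0].first_color_sensor()
    return {str(opt): option_summary(color.get_option_range(opt))
            for opt in color.get_supported_options()}
```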

wokanmanhua commented 2 years ago


Thank you for your reply. Here are two questions:

1. Is the exposure unit of the RGB camera also microseconds (usec)? For example, is 33000 equal to 33 ms?

2. I would like the RGB and IR images obtained from the camera to be as close to the original, unprocessed data as possible, so I want image-processing parameters such as contrast, gamma, sharpness, brightness and white balance to have as little effect as possible. If these parameters are set to their median values, does that mean these image-processing steps are not applied? Incidentally, the RGB image in RAW16 format is only available at 1920x1080. I don't need such a large image; I just want RGB and IR images that are as close as possible to the original.

MartyG-RealSense commented 2 years ago
  1. Depth and RGB exposure on the 400 Series cameras use different scales of values. The depth exposure uses large values (default 8500) and the RGB exposure uses small values (default 156). The line of RealSense SDK code linked below states that RGB exposure is in microseconds.

https://github.com/IntelRealSense/librealsense/blob/master/src/ds5/ds5-color.cpp#L197

  2. Most stream formats have undergone rectification in the camera hardware and have had a distortion model applied to them. You can obtain images that have not undergone rectification by using the Y16 format. Y16 is available for both infrared and RGB and is unrectified because it is used for camera calibration. Note that Y16 RGB images are monochrome, and Y16 supports a more limited range of resolutions and FPS speeds than the usual stream formats.
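A sketch of requesting an unrectified Y16 infrared stream (the 1280x800 @ 25 FPS mode is an assumption on my part; check the Y16 modes your camera actually supports in the Viewer first):

```python
def y16_frame_bytes(width, height):
    """Y16 stores 16 bits (2 bytes) per pixel."""
    return width * height * 2

def y16_ir_config(width=1280, height=800, fps=25):
    """Build a config requesting unrectified Y16 from the left IR imager (index 1)."""
    import pyrealsense2 as rs
    cfg = rs.config()
    cfg.enable_stream(rs.stream.infrared, 1, width, height, rs.format.y16, fps)
    return cfg
```

The resulting config would then be passed to `pipeline.start(cfg)` as usual.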

If you prefer to get images as close as possible to the raw images whilst using the usual stream formats, you have the option of creating a custom configuration 'json' file (also known as a Visual Preset) that overrides the defaults and applies your own preferences by loading the file. Changes applied to the images by settings such as the ones that you listed are not likely to make a lot of difference compared to the raw original frames though in my opinion.
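Loading such a json preset could look like the sketch below (`rs400_advanced_mode` and `load_json` are the SDK's advanced-mode API; the file path is a placeholder of your choosing):

```python
import json

def validate_preset_text(text):
    """A preset file must at least parse as a json object; raise if it does not."""
    if not isinstance(json.loads(text), dict):
        raise ValueError("preset json must be an object")
    return text

def load_preset(device, json_path):
    """Apply a Visual Preset json to a device via advanced mode (needs a camera)."""
    import pyrealsense2 as rs
    adv = rs.rs400_advanced_mode(device)
    if not adv.is_enabled():
        adv.toggle_advanced_mode(True)  # note: this resets the device
    with open(json_path) as f:
        adv.load_json(validate_preset_text(f.read()))
```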

wokanmanhua commented 2 years ago
> 1. Depth and RGB exposure on the 400 Series cameras use a different scale of values. The depth exposure has large values (default 8500) and the RGB exposure has small values (default 156). A line of operational code of the RealSense SDK linked to below states RGB exposure to be in microseconds.
>
> https://github.com/IntelRealSense/librealsense/blob/master/src/ds5/ds5-color.cpp#L197

Therefore, when the RGB exposure value is 312, does that mean the exposure time is 31200 microseconds? And when the IR exposure value is 20000, does that mean the exposure time is 20000 microseconds? (screenshots attached)

> 1. Most stream formats have undergone rectification in the camera hardware and had a distortion model applied to them. You can obtain images that have not undergone rectification by using the Y16 format. Y16 can be obtained for infrared and RGB and is unrectified as it is used for camera calibration. Y16 RGB images are monochrome though. Y16 also supports a more limited range of resolutions and FPS speeds than the usual stream formats.

I tried the Y16 format, but found that when I adjusted gamma, saturation, sharpness, brightness and white balance on the same picture, the image changed. Clearly these adjustments process the image, and that processing moves it further from the original. So my question is: at what values of these parameters is the image closest to the original? (before/after screenshots attached) Also, when the IR camera uses the Y16 format, the depth stream cannot be used at the same time, and I need the RGB, IR and depth streams of the D435 camera.

> If you prefer to get images as close as possible to the raw images whilst using the usual stream formats, you have the option of creating a custom configuration 'json' file (also known as a Visual Preset) that overrides the defaults and applies your own preferences by loading the file. Changes applied to the images by settings such as the ones that you listed are not likely to make a lot of difference compared to the raw original frames though in my opinion.

1. I need image data closer to the original images, so I think these camera parameters have a great impact on the images I get. 2. In fact, I preset the camera parameters in my Python code so that they are the same every time the code runs. What really bothers me is choosing these parameter values: if they are set to their middle values, does that mean the image is processed as little as possible and is closest to the original? Thank you. (screenshot attached)

MartyG-RealSense commented 2 years ago

There is little documentation regarding RGB exposure units beyond the references already discussed. The code link supplied states that RGB exposure is in microseconds. As depth exposure is represented in the RealSense Viewer in microseconds (usec), I would be inclined to think - though I am not certain - that RGB exposure uses the same units, to be consistent with its depth equivalent (e.g. 312 usec / 0.312 ms). That is my personal interpretation rather than an official statement, though.
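Under that interpretation, depth and RGB exposure share the same microsecond unit and differ only in typical magnitude. A trivial sketch of the conversion, using the default values quoted earlier in this thread:

```python
DEPTH_EXPOSURE_DEFAULT_USEC = 8500   # ~8.5 ms (depth/IR default)
RGB_EXPOSURE_DEFAULT_USEC = 156      # ~0.156 ms (RGB default)

def exposure_as_ms(value_usec):
    """Interpret an exposure option value as microseconds and return milliseconds."""
    return value_usec / 1000.0
```

So a Viewer value of 312 would correspond to a 0.312 ms RGB exposure, if the microsecond interpretation is correct.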

Likewise, there is no documentation about the extent to which settings such as gamma, sharpness, brightness and white balance affect the original raw image. RealSense applications typically operate through a High-Level API. If you prefer to access streams from the sensors directly at the camera hardware's point of capture, the SDK also offers a Low-Level API, described in the link below.

https://github.com/IntelRealSense/librealsense/blob/master/doc/api_arch.md#low-level-device-api

Here is some more information about the Low-Level API: the data-collect tool below demonstrates low-level 'sensor' callback API calls.

https://github.com/IntelRealSense/librealsense/tree/master/tools/data-collect
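A minimal sketch of that low-level sensor-callback pattern: opening a sensor directly and receiving frames in a callback, bypassing the pipeline. The choice of the first stream profile is arbitrary, and a connected camera is required for the streaming part:

```python
class FrameCounter:
    """A trivial callback object that counts frames as the sensor delivers them."""
    def __init__(self):
        self.count = 0

    def __call__(self, frame):
        self.count += 1

def start_depth_stream(callback):
    """Open the depth sensor's first profile and stream into callback (needs a camera)."""
    import pyrealsense2 as rs
    sensor = rs.context().query_devices()[0].first_depth_sensor()
    profile = sensor.get_stream_profiles()[0]  # pick any supported profile
    sensor.open(profile)
    sensor.start(callback)  # callback(frame) fires once per frame
    return sensor           # call sensor.stop() and sensor.close() when done
```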

MartyG-RealSense commented 2 years ago

Hi @wokanmanhua Do you require further assistance with this case, please? Thanks!

MartyG-RealSense commented 2 years ago

Case closed due to no further comments received.