IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Multiple D435i cameras are synchronized in depth, can the master enable imu without damaging the synchronization function #12072

Closed chenxiaocongAI closed 12 months ago

chenxiaocongAI commented 1 year ago

Required Info
Camera Model: D400 series
Firmware Version: 05.14.00.00
Operating System & Version: Linux (Ubuntu 20)
Kernel Version (Linux Only): 5.15.0-46-generic
Platform: PC
SDK Version: 2.51.1-0
Language: C/C++
Segment: Robot

Issue Description


I have five D435i cameras and have achieved depth synchronization between them, mainly by configuring one camera as the master and the others as slaves. But when I enable the IMU on the master, the five cameras' depth streams can no longer be synchronized, and the master's frame time deviates from the other four cameras. My question is: can the master's IMU be enabled while keeping the depth of all five cameras synchronized?

chenxiaocongAI commented 1 year ago

@MartyG-RealSense

MartyG-RealSense commented 1 year ago

Hi @chenxiaocongAI The IMU cannot be hardware-synced with a master trigger. However, each individual camera's IMU data packet is timestamped using the depth sensor hardware clock on that particular camera to allow temporal synchronization between gyro, accel and depth frames.

chenxiaocongAI commented 1 year ago

> Hi @chenxiaocongAI The IMU cannot be hardware-synced with a master trigger. However, each individual camera's IMU data packet is timestamped using the depth sensor hardware clock on that particular camera to allow temporal synchronization between gyro, accel and depth frames.

If it were just a single camera, I think your statement would be fine, but what I need to do is enable the IMU while synchronizing the depth of 5 cameras. After successfully synchronizing the depth of the 5 cameras, I enabled the IMU on the slave camera, but this disrupted the depth synchronization of all 5 cameras. Have you tested depth synchronization of 5 cameras with the IMU enabled on any of them at the same time? If you have, could I show you my demo?

MartyG-RealSense commented 1 year ago

Depth should be the only stream that is being hardware-synced to the master. However, a RealSense user at https://github.com/IntelRealSense/realsense-ros/issues/2648 found that the infrared stream was also apparently being affected. So there is a precedent for a stream type that is not directly being synced to a master being affected by the sync process.

RealSense users usually do not use IMU in multiple camera hardware sync setups, so I do not recall a previous case where another user has experienced the same issue as you and I do not know of an official Intel test that has done so. I do not have the equipment at my location to replicate your test myself.

chenxiaocongAI commented 1 year ago

> Depth should be the only stream that is being hardware-synced to the master. However, a RealSense user at IntelRealSense/realsense-ros#2648 found that the infrared stream was also apparently being affected. So there is a precedent for a stream type that is not directly being synced to a master being affected by the sync process.
>
> RealSense users usually do not use IMU in multiple camera hardware sync setups, so I do not recall a previous case where another user has experienced the same issue as you and I do not know of an official Intel test that has done so. I do not have the equipment at my location to replicate your test myself.

There is something I have never understood: since the IMU also uses the depth sensor's timestamp, why does enabling the IMU on one camera prevent its depth sensor from being synchronized with the other cameras?

MartyG-RealSense commented 1 year ago

The hardware sync system was designed for syncing depth between multiple cameras and has never supported use of IMU streams during sync, unfortunately. Hardware sync is not always necessary for a project, though, and satisfactory results may still be achievable with a set of unsynced cameras.

MartyG-RealSense commented 1 year ago

Hi @chenxiaocongAI Do you require further assistance with this case, please? Thanks!

chenxiaocongAI commented 1 year ago

Which camera models have the depth and color sensors on the same motherboard?


MartyG-RealSense commented 1 year ago

The D415 and D455 camera models have RGB and depth sensors on the same board. The images below show the circuit boards of D415 and D455 with the RGB sensor integrated on them.

D415

(image: D415 circuit board with integrated RGB sensor)

D455

(image: D455 circuit board with integrated RGB sensor)

chenxiaocongAI commented 1 year ago

I successfully synchronized depth across 5 cameras (including D435i and D455), but when I turned on the color sensor on the master camera, I found that the master camera's arrival timestamp sometimes differs by about 30 ms from the depth data received by my program. Has anyone encountered a problem like mine and addressed it?


chenxiaocongAI commented 1 year ago

When using a single camera with the depth and color sensors turned on, I found that the depth frame's arrival time sometimes lags the PC time recorded in my callback by 30 ms. When my PC receives data, I record the current time and then read the depth frame's arrival time; comparing the two, I sometimes find a 30 ms lag. Why is this happening? @MartyG-RealSense

MartyG-RealSense commented 1 year ago

When both depth and RGB are enabled, there will be a temporal offset of one frame, as advised by a RealSense team member at https://github.com/IntelRealSense/librealsense/issues/1548#issuecomment-389070550

If the wait_for_frames() instruction is used in a script then the RealSense SDK should automatically try to find the best timestamp match between depth and RGB.

The SDK should also attempt to automatically sync the two streams when they have the same FPS. A way to help match the FPS is to disable an RGB option called Auto-Exposure Priority. If Auto-Exposure is enabled and Auto-Exposure Priority is disabled then the SDK will attempt to enforce a constant FPS rate for both streams.

chenxiaocongAI commented 1 year ago

@MartyG-RealSense I'm not asking about the time difference between the depth and color sensors within the same frame. What I want to ask is: when the PC receives depth sensor data, it records the current PC time and subtracts the depth frame's arrival time from it, and the result is a delay of over 30 ms.

MartyG-RealSense commented 1 year ago

Timestamps are a complex subject, with different types of timestamp generating their time at different points (such as at the beginning of data transmission to USB, or at the beginning of exposure). Some are generated by the camera hardware's firmware driver whilst others are derived from the computer's system clock.

At https://github.com/IntelRealSense/librealsense/issues/2188#issuecomment-409985803 a RealSense team member explains the differences in operation between timestamp types such as frame_timestamp and time_of_arrival.

I feel that if you want the depth data arrival time to match up with the system clock of the computer then retrieving the time_of_arrival timestamp is going to work best for you so that it can take account of the time required for a frame to be captured, processed and passed to the user interface.
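The latency being measured in this thread can be sketched without camera hardware. In this illustrative snippet (the helper name `frame_latency_ms` is an assumption), time_of_arrival is treated as milliseconds since the Unix epoch on the host's system clock, which is the convention librealsense uses for that metadata value:

```cpp
#include <cassert>
#include <chrono>

// Given a frame's time_of_arrival metadata value (milliseconds since the
// Unix epoch on the host system clock), return how far behind the current
// system time that frame is. A result of ~30 ms would match the delay
// described in this thread.
double frame_latency_ms(double time_of_arrival_ms) {
    using namespace std::chrono;
    const double now_ms =
        duration<double, std::milli>(system_clock::now().time_since_epoch()).count();
    return now_ms - time_of_arrival_ms;
}
```

Because both values come from the same host clock, this difference captures capture, processing, and USB transfer time, which is what makes time_of_arrival the most directly comparable timestamp for this purpose.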

Bear in mind that if a hardware sync 'master and slave' multicam configuration is being used then the slave cameras will try to replicate the timestamp timing of the master camera.

Also, it is expected that the slave cameras' timestamps will gradually deviate from the master camera's over a long period (tens of minutes). When this occurs, it shows that hardware sync is actually working correctly. This principle is explained in the hardware sync white-paper document linked to below, under the heading Now to the somewhat counter intuitive aspect of time stamps.

https://dev.intelrealsense.com/docs/multiple-depth-cameras-configuration#3-multi-camera-programming

chenxiaocongAI commented 1 year ago

From https://github.com/IntelRealSense/librealsense/issues/8419:

> First, the RGB and depth are synced in one camera. Second, the depth can be synced among multi cameras. Yes. You are absolutely right! I will escalate this feature to dev team. @RealSenseSupport

Has this feature been implemented? Can it be used across multiple types of devices?

chenxiaocongAI commented 1 year ago

Is there an example of multi-camera software synchronization?


MartyG-RealSense commented 1 year ago

This may be more complex than what you had in mind, but the CONIX Center at Carnegie Mellon created a system for networking up to 20 RealSense cameras simultaneously and combining their individual outputs into a single one.

https://github.com/conix-center/pointcloud_stitching

Intel's hardware sync white-paper guide also provides advice about software sync.

https://dev.intelrealsense.com/docs/multiple-depth-cameras-configuration#b-collecting-synchronous-frames


> It is also possible to align frames in software using either their time-stamp or frame-number. Both approaches are valid but you do need to be careful during the initial step of subtracting the offsets that the frames were not read across a frame boundary – for example one camera could have a frame belonging to a previous frame time. The benefit of using the frame counters is that they will not walk off over time.


More information can be found at https://github.com/IntelRealSense/librealsense/issues/2148
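The frame-number approach quoted above can be sketched as pure logic (no librealsense calls; all names here are illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative sketch of the white paper's frame-number alignment: each
// camera's hardware frame counter starts at a different value, so record
// the first counter seen per camera as an offset, then compare normalized
// counters. Frames from different cameras belong to the same capture
// instant when their normalized counters are equal.
struct FrameRef {
    int camera;             // index of the camera that produced the frame
    uint64_t frame_number;  // hardware frame counter reported by that camera
};

class FrameAligner {
public:
    explicit FrameAligner(int num_cameras)
        : offsets_(num_cameras, UINT64_MAX) {}

    // Returns the frame's position relative to that camera's first frame.
    uint64_t normalized(const FrameRef& f) {
        if (offsets_[f.camera] == UINT64_MAX)
            offsets_[f.camera] = f.frame_number;  // first frame defines the offset
        return f.frame_number - offsets_[f.camera];
    }

    // Two frames are matched when their normalized counters agree.
    bool matched(const FrameRef& a, const FrameRef& b) {
        return normalized(a) == normalized(b);
    }

private:
    std::vector<uint64_t> offsets_;
};
```

The benefit the white paper highlights is that frame counters, unlike timestamps, do not walk off over time, so once the initial offsets are subtracted the match stays valid indefinitely.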

chenxiaocongAI commented 1 year ago

> D435 allows Depth to Color sync, and Depth to another Depth sync, but not Color to another Color, so at least one (or both) of the color sensors will not be synced.

Is this behavior implemented under hardware synchronization? Has Intel officially confirmed this conclusion through experiments?

chenxiaocongAI commented 1 year ago

I would like to ask whether external HW sync implicitly provides internal RGB-D sync (which is still on Intel's known-bug list) on D435i or D455 devices.

MartyG-RealSense commented 1 year ago

Color to color sync has never been supported by hardware sync. It may be possible to instead sync two streams from different cameras using the time_of_arrival timestamp though, as described at https://github.com/IntelRealSense/librealsense/issues/2186#issuecomment-409944362

Depth + RGB sync has never been supported for D435i. Intel explored implementing RGB sync for D455 but decided not to proceed.

In theory, D415 supports depth + RGB hardware sync in Inter Cam Sync Mode 3 ('Full Slave') but in practice RGB sync never worked well and is no longer supported by Intel (though mode 3 is still selectable).

chenxiaocongAI commented 1 year ago

Why isn't it stated in the official documentation that the depth and color sensors cannot be time-synchronized? An ordinary person buying an RGB-D camera assumes that the depth and color sensors are synchronized by default.

MartyG-RealSense commented 1 year ago

On most RealSense camera models except D415 and D455, the depth and RGB sensors are not mounted on the same circuit board inside the camera and the RGB sensor is attached separately to the circuit board via a cable. This means that sync between depth and color has to be performed by software mechanisms in the RealSense SDK.

chenxiaocongAI commented 1 year ago

But you don't have a method to synchronize depth sensors and color sensors on D455 either!

MartyG-RealSense commented 1 year ago

Hardware sync requires support in the firmware driver. RGB sync support for D455 in the firmware was considered but was not implemented.

chenxiaocongAI commented 1 year ago

Do you have a white paper on multi-camera software synchronization that I can check?

MartyG-RealSense commented 1 year ago

No, unfortunately; the only reference to multicam software sync is the one that I linked to earlier.

https://dev.intelrealsense.com/docs/multiple-depth-cameras-configuration#b-collecting-synchronous-frames

chenxiaocongAI commented 1 year ago

How can I ensure that the arrival order is consistent when using multi-camera synchronization? For example, if during the first sync the data from the five cameras arrives in the order 1, 2, 3, 4, 5, will all subsequent synchronized data arrive in that order? Also, the slave receives a trigger signal and my program receives data; the time difference between the slave's data and the master's data arriving on the PC is 12 ms. Can this time be reduced?

MartyG-RealSense commented 1 year ago

It should not matter which order the data of the slave cameras arrives in if their pipelines were all started at approximately the same time, as their timestamps will be synced to those of the master.

It is possible to set a specific start order for the cameras if that is your preference by using the serial number of each individual camera.

If you are hardware-syncing the cameras with sync cables connecting them together, you could check the electronics of the sync wiring to make sure that there isn't latency that delays the arrival of the trigger from the master to the slaves. If your sync wires are long, the trigger signal could become degraded as it travels over distance.

chenxiaocongAI commented 1 year ago

@MartyG-RealSense Can the color, depth, and IMU streams on the D455 be turned on simultaneously, or will that cause the depth data to lag? I have seen in other issues that either depth and IMU are enabled, or depth and color. Why is this? What problems arise when all are opened simultaneously?

MartyG-RealSense commented 1 year ago

You should not enable depth, color and IMU simultaneously in the RealSense Viewer because of an issue for which there is no fix, where one of the streams (usually RGB) becomes No Frames Received. It is possible to use all three streams with program scripting with workarounds though.

In Python you can do it by creating two separate pipelines and putting depth + color on one pipeline and IMU on its own on the other pipeline. The best Python example code for this is at https://github.com/IntelRealSense/librealsense/issues/5628#issuecomment-575943238

In C++ you can accomplish it by using callbacks in your script - see https://github.com/IntelRealSense/librealsense/issues/5291

chenxiaocongAI commented 1 year ago

Of all the filters, the spatial filter consumes the most performance. Is there a document on the performance requirements of these filters? When I hardware-sync 5 cameras and apply a spatial filter to each depth image, synchronization sometimes fails because the processing cannot keep up (the master's frame lags the other slaves by two frames, comparing device times; each camera's device time is recorded at the first successful sync). Have others encountered this issue?

chenxiaocongAI commented 1 year ago

Regarding the recommended order Decimation > Depth to Disparity > Spatial > Temporal > Disparity to Depth: what is the difference between Depth to Disparity and Disparity to Depth?

chenxiaocongAI commented 1 year ago

@MartyG-RealSense

chenxiaocongAI commented 1 year ago
  1. If I want to use 5 cameras (D435i and D455), is an i7 CPU sufficient for processing, or is a more advanced CPU needed?
  2. In a multi-camera setup, how can I confirm that the current data cannot be processed in time, causing the kernel to accumulate buffered data?
MartyG-RealSense commented 1 year ago

A RealSense team member at https://github.com/IntelRealSense/librealsense/issues/4468#issuecomment-513485662 states that "spatial filter is the one taking the most time and giving least quality improvement, so you might decide to drop it".

There is documentation about processing times for different filters at the link below.

https://dev.intelrealsense.com/docs/depth-post-processing#example-results-and-trade-offs


There is not a simple explanation for the difference between the depth to disparity and disparity to depth filters, except - as the names suggest - one converts a depth map to a disparity map and the other converts a disparity map to a depth map. Technical information about this subject can be found at https://github.com/IntelRealSense/librealsense/issues/7431#issuecomment-700665046
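As a rough illustration (not the SDK's actual implementation), the two filters apply the standard stereo relation in opposite directions, where disparity in pixels equals baseline times focal length divided by depth:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the stereo relation behind the two conversion filters:
//   disparity [pixels] = baseline [m] * focal_length [pixels] / depth [m]
// Disparity and depth are reciprocally related, so converting one way and
// then back recovers the original value.
double depth_to_disparity(double depth_m, double baseline_m, double focal_px) {
    return baseline_m * focal_px / depth_m;
}

double disparity_to_depth(double disparity_px, double baseline_m, double focal_px) {
    return baseline_m * focal_px / disparity_px;
}
```

Filters such as Spatial and Temporal are applied in disparity space because depth noise grows with the square of distance while disparity noise stays roughly constant, which is why the recommended order wraps them between Depth to Disparity and Disparity to Depth.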


In 2018 the recommendation from Intel was an Intel Core i7 processor or equivalent for 4 cameras. As Intel processor technology has advanced considerably since 2018, a modern-day i7 is likely to be sufficient for 5 cameras.

The most obvious sign of a problem with a large set of cameras may be all but one of the cameras working (for example, 3 cameras working in a set of 4 and the 4th camera not working).

With a multiple camera setup, if you are using a USB hub then you also need to bear in mind the total data bandwidth that the cameras are consuming. The more streams that are enabled on each camera and the higher the resolution and FPS, the more bandwidth that will be consumed. This principle is demonstrated in multiple camera tables provided by Intel at the link below.

https://dev.intelrealsense.com/docs/multiple-depth-cameras-configuration#2-multi-camera-considerations

Modern hubs will commonly have a bandwidth allowance of 5 Gbps (gigabits per second), though 10 Gbps models are also available.

If a single hub cannot provide enough bandwidth for the demands of all cameras then you can link multiple hubs together and spread the cameras across them (such as 2 hubs with 2 cameras on each). The USB standard allows up to 5 hubs to be "daisy-chained" together.

chenxiaocongAI commented 1 year ago
  1. Under multi-camera synchronization, the same depth data is often sent twice (the global time, frame ID, and device time are identical). Why is this? Under what judgment mechanism does the underlying layer choose to resend the same depth data as the previous frame?
  2. What is the underlying principle behind device-time frame skipping? For example, the current device time is 0, the next frame is 66666, and the frame at 33333 in between is lost.
MartyG-RealSense commented 1 year ago
  1. Global Time is enabled by default on RealSense 400 Series cameras. When multiple cameras are being used, Global Time generates a common timestamp for all streams, as described at https://github.com/IntelRealSense/librealsense/pull/3909

  2. I do not know the answer to that question, unfortunately.
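The principle behind point 1 can be sketched as follows. This is an illustrative model only, not the SDK's implementation: sample pairs of device and host clock readings, fit a linear mapping, and project later device timestamps onto the host clock.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal sketch of the idea behind a common host-clock timestamp: collect
// (device_time, host_time) sample pairs, fit host = a * device + b by least
// squares, then map any later device timestamp onto the host clock. This
// compensates both the constant offset and the slow drift between clocks.
struct ClockMapper {
    double a = 1.0, b = 0.0;

    void fit(const std::vector<double>& dev, const std::vector<double>& host) {
        const double n = static_cast<double>(dev.size());
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (std::size_t i = 0; i < dev.size(); ++i) {
            sx += dev[i]; sy += host[i];
            sxx += dev[i] * dev[i]; sxy += dev[i] * host[i];
        }
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // slope (clock skew)
        b = (sy - a * sx) / n;                          // intercept (offset)
    }

    double to_host(double device_time) const { return a * device_time + b; }
};
```

With every camera's device timestamps mapped onto the same host clock, streams from different cameras become directly comparable.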

chenxiaocongAI commented 1 year ago

Texturecountthreshold, texturedifferencethreshold, scanlinep1, scanlinep1onediscon: what is the physical meaning of these parameters?

MartyG-RealSense commented 1 year ago

Most of the Advanced Mode functions, including scanlinep1 and scanlinep1onediscon, are not documented. This is because these functions interact with each other in complex ways and so Intel chose to control them with machine learning algorithms. RealSense users are welcome to perform experimentation with the functions to see how changes affect the image though.

However, there is a detailed explanation of the meaning of texturecountthreshold and texturedifferencethreshold provided by my Intel RealSense colleagues at https://github.com/IntelRealSense/librealsense/issues/10608#issuecomment-1183341367

chenxiaocongAI commented 1 year ago

May I ask if there is a C++ example for real-time recording of depth images without using ROS?

MartyG-RealSense commented 1 year ago

The RealSense SDK has a C++ example called rs-record-playback that records depth data to a bag file, with an option to pause and resume the recording.

https://github.com/IntelRealSense/librealsense/tree/master/examples/record-playback

chenxiaocongAI commented 1 year ago

Are pixel (0,0) of the D455's depth image and pixel (0,0) of its color image directly aligned, with no further action required?

MartyG-RealSense commented 1 year ago

Depth and RGB color are not aligned by default.

However, if the D455's infrared stream is used and its format set to output RGB8 color instead of Y8 infrared then the depth and color images will be perfectly aligned by default without the need to apply alignment. This is described at the link below.

https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance#use-the-left-color-camera

chenxiaocongAI commented 1 year ago

Shouldn't an infrared stream only produce grayscale images? Why can it produce color images? And why does the left infrared color image capture the projector's dots, while the RGB color sensor does not?

MartyG-RealSense commented 1 year ago

The monochrome color is because of the Y8 infrared format. The D405, D415 and D455 camera models are capable of providing color from the left infrared sensor by setting infrared to a color format instead of Y8 infrared.

The image contains IR dots because the camera's projector component casts a pattern of dots onto objects in the real-world scene to aid in depth analysis. These are invisible to the human eye and only visible on infrared.

You can make the dots less visible by reducing the value of the Laser Power setting (though this reduces the quality of the depth image) or remove them completely by disabling the projector by setting Emitter Enabled to Off.

chenxiaocongAI commented 1 year ago

Is there any method for removing these projector dots from images? ColorCorrection={0.520508, 1.99023, 1.50684, -2, -2, -0.0820312, 1.12305, 1.01367, 1.69824, -2, 0.575195, -0.411133 Can this set of values effectively remove the projected points?

MartyG-RealSense commented 1 year ago

The D415 model supports a Visual Preset called Left Imager Color w/o IR pattern that removes the dot pattern from the IR image.

(image: the Left Imager Color w/o IR pattern Visual Preset selected in the Viewer)

It does so by setting the color correction matrix to a particular configuration.

https://raw.githubusercontent.com/wiki/IntelRealSense/librealsense/d400_presets/D415_RemoveIR.json

Those particular color correction values do not affect the dots on the D455 model though.

chenxiaocongAI commented 1 year ago

Can't the dot pattern on D455 be removed?

MartyG-RealSense commented 1 year ago

I cannot recall a past case where somebody has succeeded in doing so.

chenxiaocongAI commented 1 year ago

Regarding setting "depth units" (the depth step size) to 100 µm instead of the default 1000 µm: how do I set this in C++ code? My depth range is below 2 m; is there a better setting?
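For reference, in librealsense the depth step size is the depth sensor's RS2_OPTION_DEPTH_UNITS option, expressed in meters per raw unit, so on real hardware the call would be roughly `depth_sensor.set_option(RS2_OPTION_DEPTH_UNITS, 0.0001f);` for 100 µm. The helper names below are illustrative; the arithmetic shows the trade-off, since depth frames are 16-bit:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Depth frames store each pixel as a uint16_t raw value; the depth-units
// option is the scale from raw units to meters. Shrinking the step size
// improves quantization but shrinks the maximum representable range.
double raw_to_meters(uint16_t raw, double depth_units_m) {
    return raw * depth_units_m;
}

double max_range_m(double depth_units_m) {
    return 65535.0 * depth_units_m;  // uint16_t full scale
}
```

At 100 µm units the maximum representable depth is about 6.55 m, which comfortably covers a scene under 2 m while quantizing depth ten times more finely than the 1000 µm default.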