Closed: guanming001 closed this issue 4 years ago
Hi @tesych
I have tried to observe the sync pulse and IR pulse from two Kinects using an oscilloscope. Below are some of my observations and 3 questions I'd like to clarify.
Overview of the setup:
The digital output of an infrared sensor is connected to the oscilloscope (Note: when IR light is detected, the voltage drops to 0V). Two separate Kinect Viewers are used to configure the two Kinects as master/subordinate at a 30 Hz update rate, with a 160 us depth capture delay set on the subordinate.
Overview of all 3 signals: Yellow: Master IR; Cyan: Subordinate IR; Magenta: Sync pulse (its pulse width is very short, so you may have to enlarge the image to see the thin spikes).
Q1. There seems to be quite a large delay between the sync pulse and the IR signals; is this expected?
With the setting of 30 Hz, the period of each pulse is also 33.3 ms:
The sync pulse is indeed more than 8 us:
Q2. Within a 12.8 ms interval, there are 9 short pulses of IR, is this expected?
With the setting of 160 us depth capture delay, the interval between master and subordinate IR signal is also around 160 us:
Q3. For each Kinect, is the interval between successive IR pulses 1600 us? As a result, is the maximum number of Kinects that can be set up without causing interference to each other 10, as stated in Sync multiple Azure Kinect DK devices?
Nice write up!
The depth camera laser on time is less than 160 us; 160 us ends up having a safety margin (over the 125 us in the table below) to ensure variations in sensors don't cause issues with software. There are indeed 9 IR pulses, and the depth engine uses the corresponding 9 raw images to compute each depth frame. The exact timing changes based on the depth mode you are using.
Using the table below the exposure time can be calculated as:
Exposure Time = (IR Pulses * Pulse Width) + (Idle Periods * Idle Time)
| Depth Mode | IR Pulses | Pulse Width | Idle Periods | Idle Time | Exposure Time |
|---|---|---|---|---|---|
| NFOV Unbinned, NFOV 2x2 Binned, WFOV 2x2 Binned | 9 | 125 us | 8 | 1450 us | 12.8 ms |
| WFOV Unbinned | 9 | 125 us | 8 | 2390 us | 20.3 ms |
The table above is the raw timing of the sensor and should match up with your scope measurements. As you can tell, the IR on time is ~125 us. Due to variations in timing for the firmware, and its minimum timing resolution of 11 us, we recommend being no closer than 160 us. The idle time between each IR pulse for NFOV is ~1450 us. This idle time gives us enough laser off time to interleave 9 more sensors, allowing us a total of 10 depth cameras linked together, each offset 160 us in phase from the next.
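As a sanity check, the exposure formula can be applied directly to the table rows above. This is a minimal sketch in plain Python, using only the values from the table:

```python
def exposure_time_us(ir_pulses, pulse_width_us, idle_periods, idle_time_us):
    """Exposure Time = (IR Pulses * Pulse Width) + (Idle Periods * Idle Time)."""
    return ir_pulses * pulse_width_us + idle_periods * idle_time_us

# NFOV Unbinned / NFOV 2x2 Binned / WFOV 2x2 Binned row:
nfov = exposure_time_us(9, 125, 8, 1450)  # 12725 us, i.e. ~12.8 ms
# WFOV Unbinned row:
wfov = exposure_time_us(9, 125, 8, 2390)  # 20245 us, i.e. ~20.3 ms
print(nfov, wfov)
```

The computed 12725 us and 20245 us round to the 12.8 ms and 20.3 ms exposure times in the table.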
This leaves you with 20.5 ms to get your mocap system running without interfering with the depth cameras.
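The interleaving arithmetic above can be made concrete with a small sketch (plain Python; the 33.3 ms frame period follows from the 30 Hz rate, and the exact free-window value differs slightly from the rounded 20.5 ms quoted above):

```python
FRAME_PERIOD_US = 1_000_000 // 30  # 33333 us per frame at 30 Hz
NFOV_EXPOSURE_US = 12725           # NFOV exposure from the table above
IDLE_TIME_US = 1450                # idle time between IR pulses (NFOV)
PHASE_OFFSET_US = 160              # recommended minimum phase offset

# How many additional cameras' 160 us phase offsets fit in one idle gap:
extra_cameras = IDLE_TIME_US // PHASE_OFFSET_US  # 9
total_cameras = 1 + extra_cameras                # 10 depth cameras total

# Time left per frame for an external IR system such as a mocap rig:
free_window_us = FRAME_PERIOD_US - NFOV_EXPOSURE_US  # 20608 us, ~20.5 ms
print(total_cameras, free_window_us)
```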
Hi @wes-b Thank you for your reply!
Can I also check if there is any delay between the master sync pulse and the start of the IR pulse?
> Can I also check if there is any delay between the master sync pulse and the start of the IR pulse?
In single device mode the color and depth images are center of exposure aligned, meaning the firmware does whatever it needs to do so that the middles of the exposures of the two images are aligned. In this case the color and depth images start at different times because they have different exposures.
In multi camera scenarios the color and depth images become start of exposure aligned. So in your capture you see a 160 us delay of the IR pulse from the sync pulse (color camera VSync) because you set depth_delay_off_color_usec = 160 us. If it were zero, you should not have seen a delay at all.
Hi, I think this doesn't answer question Q1. In this case there is ~11 ms between the sync pulse sent by the master and the first IR pulse. I thought the depth sensor was supposed to start directly after the sync pulse.
The color camera is what starts on the pulse. The delay from the sync pulse to the start of the depth camera then comes from the difference in exposure times between the two cameras.
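Under the center-of-exposure model described above, the delay from the color start to the depth start can be sketched as follows (plain Python; the ~33 ms color exposure here is an assumed example value for auto exposure, not a measured one, and the 12.8 ms depth exposure is the NFOV value from the table):

```python
def depth_start_delay_us(color_exposure_us, depth_exposure_us):
    """Delay from the start of the color exposure to the start of the
    depth exposure when the two exposures are center-aligned."""
    return color_exposure_us // 2 - depth_exposure_us // 2

# Assumed example: color auto exposure near the full 33.3 ms frame,
# NFOV depth exposure ~12.8 ms:
print(depth_start_delay_us(33330, 12800))  # 10265 us, on the order of the ~11 ms observed
```

This is consistent with the observation above that the delay changes when the color exposure changes.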
Thank you for the answer. I still have one question.
I did some tests with a setup similar to @guanming001 but with only one Kinect in master mode. I noticed that the delay between the sync pulse and the first IR pulse changes depending on the color exposure. Also, the timestamps returned by k4a_image_get_device_timestamp_usec for the color and depth cameras are almost the same (<22 us difference).
Could you answer the following question: does k4a_image_get_device_timestamp_usec correspond to the start of the frame or the center?
k4a_image_get_device_timestamp_usec returns a center of exposure timestamp. It is the center of exposure that the camera is aligning.
Thank you, the delay calculation answers it perfectly. But for the timestamp, what does the doc mean by:
> Devices that are acting in master or subordinate roles report image timestamps in terms of Start of Frame instead of Center of Frame. (Timestamp considerations)
Good catch, I opened #1107.
Hi @wes-b, thank you for providing this information, it's very helpful!
I have a few follow-up questions about a multi-camera setup.
Can you explain how to convert the exposure values given in the docs into actual exposure times for the formula (color exposure)/2 - (depth exposure)/2? I don't understand the units in that table.
Alternatively, if I'm using the SDK, I think I can manually set the color exposure time with this command, am I reading it right that I just pass in an integer number of microseconds, and I can choose whatever value I want (say between 0 and 33,333)?
This may entirely negate the two questions above, but this will be done while providing an external syncing signal (i.e., from a Teensy) to two Azures. I plan to run the two Azures both in subordinate mode, with the signal chain in series like external sync --> Azure 1 --> Azure 2
(note that the docs say that one should still be in master mode, but I have found that in that case it doesn't wait for the external sync). Originally I had kept color ON for Azure 1 out of deference to the docs, but after more testing, it seems that I can turn color OFF for both Azures without any loss of performance. So the question is, for Azure 1, assuming depth_delay_off_color_usec and subordinate_delay_off_master_usec are both 0, does the first IR pulse start with the arrival of the syncing signal? And for Azure 2, if I set subordinate_delay_off_master_usec to 160 us, even though Azure 1 isn't technically in "master" mode, Azure 2 should still receive a sync signal and start its first IR pulse then, right?
In this case, the returned timestamps will still be the center of the 12.8 ms period?
Thank you!!
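The questions above can be made concrete with a small timing model (plain Python). This is only a sketch of my understanding: it encodes the two delay parameters named in this thread under the start-of-exposure-aligned assumption for multi-camera mode, and is not confirmed SDK behavior:

```python
def ir_start_us(sync_arrival_us, subordinate_delay_off_master_usec=0,
                depth_delay_off_color_usec=0):
    """Assumed model: the color VSync fires subordinate_delay_off_master_usec
    after the sync signal arrives, and the depth IR starts
    depth_delay_off_color_usec after the color VSync."""
    return (sync_arrival_us + subordinate_delay_off_master_usec
            + depth_delay_off_color_usec)

# Azure 1: both delays 0 -> first IR pulse starts with the sync signal (t = 0)
# Azure 2: subordinate_delay_off_master_usec = 160 -> IR starts 160 us later
print(ir_start_us(0), ir_start_us(0, subordinate_delay_off_master_usec=160))
```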
Hi
I would like to eliminate the interference issue between Kinect Azure and external Mocap system.
Thus, I would like to understand in more detail how the timing of the Kinect ToF depth camera works, so that I can time the Mocap system to avoid strobing its IR while the Kinect IR is active.
I am also confused about the relationship between the exposure time (e.g. 12.8 ms for NFOV) and the requirement of at least 160 microseconds (#436) between depth cameras to prevent interference.
Thank you.