luxonis / depthai

DepthAI Python API utilities, examples, and tutorials.
https://docs.luxonis.com
MIT License

Question about the timestamp returned by `dai::ImgFrame::getTimestamp()` method #799

Closed · lenardxu closed this issue 1 year ago

lenardxu commented 1 year ago

Greetings! I have a question about timestamps. I am working on synchronizing an OAK-D camera with an external IMU sensor. For that, I need to account for the latency on both sides, including the latency between image capture and retrieval on the host (as you shared in this link); for mono 400p, for example, it is about 7.5 ms. Instead of setting a timestamp right after `dai::DataOutputQueue::get<dai::ImgFrame>()` returns and then compensating for that latency, which is not very accurate, I intended to use the built-in `dai::ImgFrame::getTimestamp()` method, since my understanding is that its returned timestamp already accounts for that latency (it is the timestamp assigned when the image was captured) and is expressed in the host time base. Is that right?
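For context, here is a minimal sketch of the two approaches being compared, written against the Python API (the question itself uses the C++ equivalents); the pipeline setup, stream name, and queue size are illustrative assumptions, not taken from the post:

```python
import depthai as dai

# Mono 400p pipeline, matching the example in the question.
pipeline = dai.Pipeline()
mono = pipeline.create(dai.node.MonoCamera)
mono.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("left")
mono.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="left", maxSize=4, blocking=True)
    frame = q.get()
    # Option A: stamp on arrival; this includes the capture->host latency
    # (~7.5 ms for mono 400p) and would need manual compensation.
    ts_arrival = dai.Clock.now()
    # Option B: the built-in timestamp, assigned near capture time on the
    # device and translated to the host time base.
    ts_capture = frame.getTimestamp()
    print(f"capture->retrieval latency: {(ts_arrival - ts_capture).total_seconds() * 1e3:.2f} ms")
```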

Erol444 commented 1 year ago

Hi @lenardxu, yes, for global shutter sensors the ts is set a few microseconds after the exposure, when the image is read out from the sensor. Thanks, Erik

lenardxu commented 1 year ago

> Hi @lenardxu, yes, for global shutter sensors the ts is set a few microseconds after the exposure, when the image is read out from the sensor. Thanks, Erik

Thanks for your reply. I then ran some experiments measuring the latency between the ts returned by the built-in `getTimestamp()` method of `dai::ImgFrame` and the ts returned by `std::chrono::steady_clock::now()`. I found that each time the first frame arrives at the host, its latency is much larger than that of the following frames (e.g. 1st: 40 ms, 2nd: 9.13 ms, 3rd: 9.10 ms, 4th: 8.93 ms, ...). Moreover, the first value fluctuates across repeated runs, e.g. between 55 ms and 14 ms. Is that normal?
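A sketch of the measurement described here, assuming the same single-mono-camera pipeline as in the previous snippet; `dai.Clock.now()` plays the role of `std::chrono::steady_clock::now()`:

```python
import depthai as dai

pipeline = dai.Pipeline()
mono = pipeline.create(dai.node.MonoCamera)
mono.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("left")
mono.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="left", maxSize=4, blocking=True)
    for i in range(10):
        frame = q.get()
        # Host "now" minus the frame's host-synced capture timestamp.
        latency_ms = (dai.Clock.now() - frame.getTimestamp()).total_seconds() * 1e3
        print(f"frame {i}: latency {latency_ms:.2f} ms")
```

On the numbers reported above, the first iteration would print the large, fluctuating value and the later iterations the steady ~9 ms.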

Erol444 commented 1 year ago

Hi @lenardxu, I think this is because of several things:

* camera sensors (the whole system actually) are initializing, so there's some delay
* camera sensors are doing their software syncing
* depthai constantly syncs host clock and device clock, and it might not be very accurate at the start

Thoughts?

lenardxu commented 1 year ago

> Hi @lenardxu, I think this is because of several things:
>
> * camera sensors (the whole system actually) are initializing, so there's some delay
> * camera sensors are doing their software syncing
> * depthai constantly syncs host clock and device clock, and it might not be very accurate at the start
>
> Thoughts?

Thanks for your hints. Regarding the second reason: in my setup only the left mono camera is activated in the pipeline and output queue, so I estimate that no inter-sensor syncing takes place. Regarding the first and third, I am not sure.

Here is an interesting finding from a further experiment. I still use multiple threads for software-syncing the IMU (IMU worker thread) and left mono camera (camera worker thread) measurements, but now discard the image frames in the camera worker thread for a fixed initial period, say 1 s, while the other threads wait for the same period (the intent being to use only the later frames, to take advantage of the small, fixed latency described in my last post). As a result, the latency now stays large throughout the program's execution (e.g. 1st: 42 ms, 2nd: 42 ms, ..., 200th: 42 ms) and is not fixed across experiments (1st run: 42 ms; 2nd run: 20 ms; 3rd run: 38 ms). That looks very odd to me. Do you have any idea why? Here is a screenshot of how the oddly large and constant latency looks during both the discard phase and the syncing phase:

[screenshot: problem_of_camera_latency]
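A sketch of the warm-up discard described in this comment, assuming `q` is the camera output queue of a running pipeline such as the ones above; the 1 s period and helper name are hypothetical:

```python
import time
import depthai as dai

def drain_warmup(q: dai.DataOutputQueue, warmup_s: float = 1.0) -> None:
    """Discard all frames arriving within the first `warmup_s` seconds."""
    t0 = time.monotonic()
    while time.monotonic() - t0 < warmup_s:
        q.tryGet()  # non-blocking; returns None if nothing has arrived yet
```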

lenardxu commented 1 year ago

Continuing the last post: in a further experiment where the latency measurement runs on its own, without any other threads involved, the latency is approximately the same as what you shared (https://docs.luxonis.com/projects/api/en/latest/tutorials/low-latency/#low-latency). However, as soon as the software-sync part with multiple threads is involved (one thread for the camera sending out images, one for the IMU, and one for interpolation based on the visual cue), the latency becomes a problem, as shown in the last post. Hence I wonder whether the data (image) transmission via USB 3.1 "collides" with the data (IMU) transmission (via software I2C in my case)?

themarpe commented 1 year ago

@lenardxu Just a note: I'd suggest relying on `getDeviceTimestamp()` for syncing, as the host-synced `getTimestamp()` isn't monotonic at the moment and can fluctuate depending on the amount of traffic going to/from the device (something we have yet to fix)
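A sketch of this suggestion, under the assumption that the device-clock timestamp (exposed as `getTimestampDevice()` in recent Python API versions; the comment above refers to it as `getDeviceTimestamp()`) is mapped onto the host timeline with a single fixed offset, so later traffic cannot make the timeline jump:

```python
import depthai as dai

_offset = None  # host_time - device_time, estimated once from the first frame

def host_time_from_device(frame: dai.ImgFrame):
    """Map the monotonic device timestamp onto the host time base."""
    global _offset
    if _offset is None:
        # One-shot offset estimate; a real implementation might average
        # several frames or refine this with a filter.
        _offset = frame.getTimestamp() - frame.getTimestampDevice()
    return frame.getTimestampDevice() + _offset
```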

lenardxu commented 1 year ago

Thanks, I'll try using `getDeviceTimestamp()` to correct the problematic `getTimestamp()` in my case.

Erol444 commented 1 year ago

Added IMU + frame syncing demo here. cc @lenardxu
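The demo is linked above; as a rough illustration of the idea (not the demo's actual code), matching can be done by picking, for each frame, the buffered IMU packet with the nearest timestamp:

```python
def match_nearest(frame_ts, imu_packets):
    """Return the (timestamp, payload) IMU entry closest to frame_ts.

    Both timestamps are assumed to be datetime.timedelta values on a
    common time base (e.g. both from getTimestamp()).
    """
    return min(imu_packets, key=lambda p: abs((p[0] - frame_ts).total_seconds()))
```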

CemEntok commented 3 months ago

> @lenardxu Just a note: I'd suggest relying on `getDeviceTimestamp()` for syncing, as the host-synced `getTimestamp()` isn't monotonic at the moment and can fluctuate depending on the amount of traffic going to/from the device (something we have yet to fix)

Hi @themarpe, is there an update so that we can use `getTimestamp()` safely for synchronizing the camera with an external IMU?

themarpe commented 3 months ago

@CemEntok host syncing has been much improved since then (much tighter sync), but I'm not exactly sure whether it also addressed monotonicity (CC: @asahtik)

asahtik commented 3 months ago

Hi @CemEntok, the improved host syncing is still not monotonic. We ran some experiments using double exponential smoothing to address this issue, but got overall worse results compared to the current method.

CemEntok commented 3 months ago

Thank you for your comments @themarpe @asahtik

However, as you described here, host-clock syncing is below 200 μs over a USB connection, so I believe that even if a perfect sync is not possible, timestamps within a couple of milliseconds of each other can be obtained from separate camera and IMU threads. For example:

I get `rgb | ts: 421389.067 at time: 421389.101` with `print(f"ts: {self.output_queue.tryGet().getTimestamp().total_seconds() :.3f} at time: {dai.Clock.now().total_seconds():.3f}")`

and `IMU Data: 2.703 | Timestamp: 421389.065`. So `getTimestamp()` reports a 2 ms difference between the two streams; even if each `getTimestamp()` output is only accurate to sub-ms level, they can be off by at most 2 ms each, which means at most a 4 ms difference in reality, and that does not affect much, does it? And when I display them at time 421389.101, the delay comes to roughly 36 ms, excluding the time spent in `cv2.imshow()` and the synchronization script. The point I would like to emphasize is that a couple of milliseconds of inaccuracy in `getTimestamp()` does not noticeably affect the overall delay, which is on the order of tens of ms.
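A sketch reproducing the arithmetic above; `rgb_q` and `imu_q` are assumed to be output queues of an already-running pipeline with RGB and IMU streams:

```python
import depthai as dai

def print_delays(rgb_q: dai.DataOutputQueue, imu_q: dai.DataOutputQueue) -> None:
    rgb = rgb_q.get()              # latest ImgFrame
    imu = imu_q.get().packets[-1]  # newest IMU packet in the batch

    now = dai.Clock.now().total_seconds()
    rgb_ts = rgb.getTimestamp().total_seconds()
    imu_ts = imu.acceleroMeter.getTimestamp().total_seconds()

    # With the numbers above: 421389.067 - 421389.065 = ~2 ms stream skew,
    # and 421389.101 - 421389.067 = ~34-36 ms end-to-end delay.
    print(f"frame vs IMU timestamp skew: {(rgb_ts - imu_ts) * 1e3:.1f} ms")
    print(f"delay at display time:       {(now - rgb_ts) * 1e3:.1f} ms")
```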

themarpe commented 2 months ago

@CemEntok so I think the main point here is that the "delay" should not matter. The delay will rise and fall depending on how much other traffic is being generated, but the actual frames & IMU samples have to be matched by their corresponding timestamps, even if that happens later in time (so not "fully realtime", perhaps with 20-50 ms latency)
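A sketch of this matching-later approach, with hypothetical buffers of (timestamp, payload) tuples and a tolerance chosen purely for illustration:

```python
from collections import deque

frame_buf, imu_buf = deque(), deque()  # filled by the capture threads

def pop_matches(tolerance_s: float = 0.005):
    """Yield (frame, imu) pairs whose timestamps agree within tolerance_s."""
    while frame_buf and imu_buf:
        dt = (frame_buf[0][0] - imu_buf[0][0]).total_seconds()
        if abs(dt) <= tolerance_s:
            yield frame_buf.popleft(), imu_buf.popleft()
        elif dt > 0:
            imu_buf.popleft()    # IMU sample too old for any frame; drop it
        else:
            frame_buf.popleft()  # frame too old for any IMU sample; drop it
```

Matching this way decouples correctness (timestamp agreement) from transport delay, which is the point made above.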