Open shockjiang opened 1 year ago
Hi @shockjiang Please note that this is more a question than a feature request. The best place for questions is the Stereolabs community forum: community.stereolabs.com
However, what you write is not correct. The CMOS sensors are rolling shutter sensors and they are read line-by-line by the ISP on the camera, but they are transmitted to the host device as two complete synchronized frames.
The timestamp corresponds to the host system clock at the exact moment when the frames are ready in the buffer of the USB3 controller.
Thank you for your response.
> left and right images are sampled and transmitted synchronized. (per line)

No, they are sampled per line but transmitted to the host as complete images.
Still, I wonder: for the left image, how can I get an accurate timestamp for when the CMOS sensor sampled the world at its middle line?
This is not possible. The HW does not allow precisely retrieving this kind of information.
You can estimate the mean latency by pointing the camera to a monitor showing a stopwatch. Here's an example of how to do that: https://www.youtube.com/watch?v=jmVeFdKxZDc
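As a workaround, once a mean latency has been estimated (for instance with the stopwatch method above), it can be subtracted from the host-side timestamp to approximate the capture time. A minimal sketch, assuming a hypothetical measured latency value (the function name and latency figure are illustrative, not part of the ZED SDK):

```python
def corrected_capture_time_ns(host_timestamp_ns: int, mean_latency_ms: float) -> int:
    """Approximate the capture time by subtracting an empirically
    measured mean latency from the host-side frame timestamp."""
    return host_timestamp_ns - int(mean_latency_ms * 1_000_000)

# Example: a frame timestamped at t = 1_000_000_000 ns on the host,
# with a measured mean latency of ~40 ms
print(corrected_capture_time_ns(1_000_000_000, 40.0))  # -> 960000000
```

This only removes the *mean* latency; frame-to-frame jitter in the USB transfer remains uncorrected.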
For a GPS + VO fusion application, if the app cannot get the precise time at which the camera captured the image, the performance of the fusion may be greatly affected. Am I right? If so, there is a need for a feature that provides an accurate timestamp.

What's more, is there an API to set the time on the camera's OS?
> For a GPS + VO fusion application, if the app cannot get the precise time at which the camera captured the image, the performance of the fusion may be greatly affected. Am I right? If so, there is a need for a feature that provides an accurate timestamp.

The method currently used to retrieve the frame timestamp is as precise as possible.

> What's more, is there an API to set the time on the camera's OS?

Can you explain better? The question is not clear.
> For a GPS + VO fusion application, if the app cannot get the precise time at which the camera captured the image, the performance of the fusion may be greatly affected. Am I right? If so, there is a need for a feature that provides an accurate timestamp.

> The method currently used to retrieve the frame timestamp is as precise as possible.

I cannot agree. A delay of about 30-50 ms? The GPS sample rate is ~10 Hz (100 ms), so that seems like quite a long time. If you look at FLIR or Basler cameras, you will find that they even allow callback functions at the start and end of exposure.

> What's more, is there an API to set the time on the camera's OS?

> Can you explain better? The question is not clear.

Does your camera hold the same "system time" as Linux? If so, how can we set a time for it?
In my first reply, I wrote this:
> The timestamp corresponds to the host system clock at the exact moment when the frames are ready in the buffer of the USB3 controller.
This is how the timestamp is set for the ZED. The internal ISP does not allow retrieving the timestamp in a more precise way. I think this also replies to your latest question.
If you use the GPS with a rate of 10 Hz, and the ZED latency is 30-50 msec, then it's easy to assign each frame to the correct GPS datum. It would have been a problem only if the latency were higher than the GPS period.
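The assignment described here, pairing each frame with the GPS fix closest in time, can be sketched as follows. The timestamp values are hypothetical nanosecond epoch times, not real sensor output:

```python
import bisect

def nearest_gps_fix(frame_ts_ns: int, gps_ts_ns: list[int]) -> int:
    """Return the index of the GPS fix whose timestamp is closest to the
    frame timestamp. gps_ts_ns must be sorted in ascending order."""
    i = bisect.bisect_left(gps_ts_ns, frame_ts_ns)
    if i == 0:
        return 0
    if i == len(gps_ts_ns):
        return len(gps_ts_ns) - 1
    # Pick whichever neighbor has the smaller absolute time difference.
    return i if gps_ts_ns[i] - frame_ts_ns < frame_ts_ns - gps_ts_ns[i - 1] else i - 1

# GPS fixes at 10 Hz (every 100 ms); a frame arriving 40 ms after fix #1
gps = [0, 100_000_000, 200_000_000, 300_000_000]
print(nearest_gps_fix(140_000_000, gps))  # -> 1
```

Because the camera latency (30-50 ms) is well under half the 100 ms GPS period, the nearest fix is still the correct one even with an uncorrected timestamp, which is the point being made above.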
Thank you so much @Myzhar. I plan to buy a ZED 2 for my project, and I hope the ZED will be able to provide better timestamps in the future.
It's my pleasure. Do not hesitate to write an email to support@stereolabs.com if you need help.
Preliminary Checks
Proposal
According to the documentation here: https://www.stereolabs.com/docs/api/classsl_1_1Camera.html#a3cd31c58aba33727f35aeae28244c82d, the timestamp for each frame is the time that indicates the end of that frame's readout:
Is the above description right?
In some cases, we really need an accurate time, which could be the midpoint between the start and end of the frame's sampling (readout); that would be an accurate timestamp for the frame.
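The mid-readout time requested here could, in principle, be derived from the end-of-readout timestamp if the sensor's readout duration were known. A hypothetical sketch (`readout_duration_ms` is an assumed, sensor-specific constant; the ZED SDK does not expose such a value, as discussed in the thread):

```python
def mid_readout_time_ns(end_of_readout_ns: int, readout_duration_ms: float) -> int:
    """Estimate the instant the sensor's middle line was sampled:
    halfway between the start and the end of the rolling-shutter readout."""
    return end_of_readout_ns - int(readout_duration_ms * 1_000_000 / 2)

# Example: readout ends at t = 1_000_000_000 ns and the full
# line-by-line readout is assumed to take 20 ms
print(mid_readout_time_ns(1_000_000_000, 20.0))  # -> 990000000
```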
Use-Case
realtime multi-sensor fusion for navigation
Anything else?
No response