CBadam opened this issue 1 month ago
Actually, I did more tests, and I found that the last stream I enable in the recorder is the one whose timestamps do not match the other streams. If I do this in the recorder:
```python
config.enable_stream(ir_profile)
config.enable_stream(depth_profile)
config.enable_stream(color_profile)
```
I get this during playback:

```
index= 0: color_timestamp=1726842583477
index= 0: depth_timestamp=1726842583410
index= 0: ir_timestamp=1726842583410
index= 1: color_timestamp=1726842583543
index= 1: depth_timestamp=1726842583477
index= 1: ir_timestamp=1726842583477
```
I will reproduce and test the issue based on your feedback next week.
@CBadam This is a bug in the underlying Orbbec SDK C++ library: frame synchronization is not currently supported during playback. We will fix this in a future release. Once the fix is applied, you will be able to enable frame synchronization in your playback code to resolve the issue:
```python
pipeline.enable_frame_sync()  # add this line
pipeline.start()
```
After the bug is fixed, I will notify you in this issue.
@zhonghong322 Thank you for your response. I will be waiting for the bug fix. In the meantime, is there another way to generate a colored point cloud (a function I can pass the depth frame and the color frame)? I need to generate point clouds, but since I am reading from a bag file, the color and depth timestamps are mismatched, so frames.get_color_point_cloud(camera_param) produces a misaligned colored point cloud.
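Until the SDK fix lands, one workaround is to build the colored point cloud yourself from a depth frame and a color frame that you have paired manually. Below is a minimal pinhole back-projection sketch in NumPy. It assumes the depth and color images are already registered to the same viewpoint (same resolution, pixel-aligned) and that you have the intrinsics (fx, fy, cx, cy) from camera_param; the function name, signature, and depth_scale default are illustrative, not part of the SDK.

```python
import numpy as np

def colored_point_cloud(depth, color, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project a depth image with a pinhole model and attach per-pixel color.

    depth: (H, W) depth image in sensor units (e.g. millimeters).
    color: (H, W, 3) color image registered to the depth viewpoint.
    Returns an (N, 6) array of XYZRGB rows, one per valid depth pixel.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float64) * depth_scale       # depth units -> meters
    x = (u - cx) * z / fx                            # pinhole back-projection
    y = (v - cy) * z / fy
    valid = z > 0                                    # drop missing depth
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = color[valid].astype(np.float64)
    return np.hstack([points, colors])
```

The same math underlies frames.get_color_point_cloud; doing it by hand just lets you choose which color frame to pair with which depth frame.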
Description: I am using the pyorbbecsdk with a Femto Mega camera to record and playback data. The issue arises when recording data with recorder.py (activating both depth and color sensors) into a bag file, and then playing it back using the playback.py example. I observe that the color frame and depth frame in each frameset do not share the same timestamp during playback.
It appears that the color frame corresponds to the depth frame of the next frameset, leading to mismatches between depth and color frames. This becomes a significant problem when trying to save colored point clouds, as the color and depth data from different frames do not align correctly, resulting in inaccurate visualizations.
Below is an example of how I am testing playback with the playback.py example:
Output example:

```
index= 0: color_timestamp=2193546
index= 0: depth_timestamp=2193345
index= 1: color_timestamp=2193747
index= 1: depth_timestamp=2193546
```
As you can see, there is always a one-frame offset between the color and depth timestamps, with the color frame lagging behind the corresponding depth frame.
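Since the lag is a consistent one-frame offset, the streams can also be re-paired on the consumer side by matching each depth timestamp to its nearest color timestamp. The sketch below is a generic greedy matcher over buffered (timestamp, frame) tuples; the helper name, tuple format, and max_gap_ms threshold are my own choices, not SDK API.

```python
def pair_by_timestamp(color_frames, depth_frames, max_gap_ms=20):
    """Pair each depth frame with the nearest-in-time color frame.

    color_frames, depth_frames: lists of (timestamp_ms, frame) tuples,
    sorted by timestamp. Pairs farther apart than max_gap_ms are dropped.
    """
    pairs = []
    j = 0
    for d_ts, d_frame in depth_frames:
        # advance while the next color frame is at least as close to d_ts
        while j + 1 < len(color_frames) and \
                abs(color_frames[j + 1][0] - d_ts) <= abs(color_frames[j][0] - d_ts):
            j += 1
        c_ts, c_frame = color_frames[j]
        if abs(c_ts - d_ts) <= max_gap_ms:
            pairs.append((d_frame, c_frame))
    return pairs
```

With the one-frame lag shown above, this pairs depth frame i with the color frame that playback delivered in frameset i+1, and discards leading/trailing frames that have no close match.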
Here is the code for how I recorded the data:
Additional Notes:
When I activate the infrared (IR) sensor alongside depth and color, the IR frame inherits the one-frame delay, and the depth and color frames align with each other. The issue seems to arise only during playback, as the timestamps match correctly during recording. I would greatly appreciate any insights or solutions to ensure that depth and color frames are synchronized during playback, allowing for proper point cloud generation.
Thank you!