Thanks for spotting this bug :nerd_face:! Based on an initial investigation, the issue is located in the first cloud of the sequence. It is likely caused by recording the data in raw format (UDP packets) and reconstructing the clouds with the Ouster library: since the recording can start at any point during the acquisition of a scan, the first scan may be incomplete.
If you could share the timestamps or the IDs of the clouds that appear broken, we can investigate further.
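In the meantime, a rough check along these lines (just a sketch; the 10 Hz period and the thresholds are assumptions, and `t` is taken to be the per-point offset in nanoseconds, as in your snippet below) could flag clouds that were reconstructed from a recording that started mid-scan:

```python
import numpy as np

def first_scan_suspect(t_ns, period_s=0.1, slack=1.5, min_coverage=0.5):
    # A cloud assembled from a capture that started mid-scan tends to either
    # cover much less than a full revolution or contain offsets far outside
    # the nominal scan period.
    t_s = np.asarray(t_ns, dtype=np.float64) / 1e9
    span = t_s.max() - t_s.min()
    return (span < min_coverage * period_s
            or t_s.min() < -slack * period_s
            or t_s.max() > slack * period_s)
```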
Wow, thanks for your swift reply, bro!
In fact, there are a few `t` values that fall outside the expected boundaries, and most bag files have this issue.
Originally, I wanted to deskew the point cloud, and here is the code snippet I use to detect the abnormal situation in my `_save_cloud` function:
```python
import numpy as np

# inside the _save_cloud function:
timestamp_in_sec = timestamp / 1e9
time_diff_for_each_point = data.points["t"] / 1e9  # per-point offset, ns -> s
time_for_each_point = timestamp_in_sec + time_diff_for_each_point
min_value = np.min(time_diff_for_each_point)
max_value = np.max(time_diff_for_each_point)
# 0.15 s is a heuristic bound: the offsets should stay within ~0.15 s
# since the point clouds come at 10-20 Hz!
are_normal_values = min_value >= -0.15 and max_value <= 0.15
if not are_normal_values:
    print(f"Unexpected 't' values have come. Skip {timestamp_in_sec} frame")
    return  # skip this frame
deskewed_points = np.stack([data.points["x"], data.points["y"], data.points["z"]], axis=1)
```
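For context, the deskewing I had in mind is roughly the constant-velocity interpolation sketched below (just an illustration of the idea, not your code; `T_rel` is a hypothetical relative pose over one scan period, e.g. taken from odometry). It only makes sense when the per-point `t` offsets actually stay within the scan period, which is why out-of-range values hurt:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_constant_velocity(points_xyz, t_offsets_s, T_rel, scan_period_s=0.1):
    """Constant-velocity deskewing sketch.
    points_xyz  : (N, 3) points in the sensor frame at their capture times
    t_offsets_s : (N,) per-point offsets in seconds, expected within the scan period
    T_rel       : (4, 4) relative sensor pose over one scan period (hypothetical input)
    """
    alphas = np.clip(t_offsets_s / scan_period_s, 0.0, 1.0)
    key_rots = Rotation.from_matrix(np.stack([np.eye(3), T_rel[:3, :3]]))
    per_point_rot = Slerp([0.0, 1.0], key_rots)(alphas)  # interpolate rotation per point
    per_point_trans = alphas[:, None] * T_rel[:3, 3]     # interpolate translation per point
    return per_point_rot.apply(points_xyz) + per_point_trans
```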
And when I printed the offending values, they were around 3.xx s, which is quite a bit larger than expected.
For instance, I see this in `coloseeo_train0` (please ignore the `GTData timestamp out of range` message) and in `spagna_train0`.
As far as I know, `point['t']` should lie roughly between 0.0 and 0.1 s, yet I observed that some point clouds have erroneous values: for instance, in the `campus_train1` sequence, the time values of some points are 3.604361924 s. These values sometimes make the deskewing imprecise. By any chance, do you happen to know why these situations occur when you guys gather the data? Take your time, and feel free to let me know :)
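P.S. In the meantime, the workaround I am leaning towards (just a sketch on my side, not something from your tooling) is to drop only the points whose offsets fall outside the expected window, instead of skipping the whole frame:

```python
import numpy as np

def mask_outlier_offsets(points_xyz, t_ns, max_abs_offset_s=0.15):
    # Keep only the points whose per-point offset lies within the expected
    # window; 0.15 s is the same heuristic bound as in the snippet above.
    t_s = t_ns / 1e9
    keep = np.abs(t_s) <= max_abs_offset_s
    return points_xyz[keep], t_s[keep]
```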