A recurring user scenario is to have data as two arrays of the same length:

- possibly-repeating timestamps
- matching data (e.g. points, etc.)

For the `rr.send_columns` API, one must prepare a list of non-repeating timestamps and the matching "partitions", i.e. the sizes of the groups in the data that correspond to each non-repeating timestamp. This is sufficiently annoying to do with numpy that it would warrant a helper, so as to make the following code more compact:
```python
import numpy as np
import rerun as rr

points = ...  # numpy array of points
times = ...   # numpy array of possibly-repeating timestamps

# indices at which `times` changes, excluding 0, including `n`
change_indices = (np.argwhere(times != np.concatenate((times[1:], [np.nan]))).T + 1).reshape(-1)

# non-repeating timestamps
non_repeating_times = times[change_indices - 1]

# partitions (i.e. the size of the group corresponding to each non-repeating timestamp)
partitions = np.concatenate(([change_indices[0]], np.diff(change_indices)))
assert np.sum(partitions) == len(times)

# logging
rr.send_columns(
    "/entity/path",
    [rr.TimeSecondsColumn("time", non_repeating_times)],
    [rr.components.Position3DBatch(points).partition(partitions)],
)
```