## Motivation

Supporting `timestamps: list` would be useful when we want to push a chunk of samples with irregular timestamps. This is done, e.g., in the neural data simulator project to publish spike timestamps over LSL. The current neural data simulator implementation uses multiple `push_sample` calls, which is slower than a single `push_chunk` call.

## Implementation
Because calling libraries depend on pylsl's speed, it was important that the default `.push_chunk(timestamp: float)` call not slow down with this new feature. Thus, the current default code path assumes `timestamp: float` and falls back (by catching an exception) for the case that `timestamp: Union[list, tuple, np.ndarray]`. See the Testing section below for an informal performance test.
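As a rough illustration of this pattern, here is a simplified sketch (not pylsl's actual `push_chunk` implementation): the fast path attempts the float conversion first, and only pays the cost of exception handling when a sequence of timestamps is passed.

```python
def push_chunk(samples, timestamp=0.0):
    """Sketch of the float-first dispatch described above (hypothetical)."""
    # Fast path: assume `timestamp` is a single float covering the chunk.
    try:
        ts = float(timestamp)  # raises TypeError for list/tuple/ndarray
        return "single", ts
    except TypeError:
        # Fallback: treat `timestamp` as one timestamp per sample.
        timestamps = [float(t) for t in timestamp]
        if len(timestamps) != len(samples):
            raise ValueError("need exactly one timestamp per sample")
        return "per-sample", timestamps
```

Because the `try:` block itself is nearly free when no exception is raised, callers passing a single float see essentially no overhead from the new feature.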
## Testing
On a separate branch (not planned for committing), we checked that `.push_chunk(timestamps: list)` matches the received `.pull_chunk(timestamps)` values.

### Performance
`pylsl/examples/PerformanceTest.py` performs similarly before and after the change, based on `python3 -m cProfile -o pylsl.cprof pylsl/examples/PerformanceTest.py`.
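One way to inspect the saved `.cprof` file is Python's built-in `pstats` module; the snippet below shows the pattern on a toy workload (`work` is a stand-in, not `PerformanceTest.py`):

```python
import cProfile
import io
import pstats


def work():
    # Toy workload standing in for the real script being profiled.
    return sum(i * i for i in range(10_000))


# Save profile data to a file, analogous to `-o pylsl.cprof` above.
cProfile.runctx("work()", globals(), locals(), "toy.cprof")

# Load the stats and print the top entries sorted by cumulative time.
stream = io.StringIO()
stats = pstats.Stats("toy.cprof", stream=stream)
stats.sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```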
Before:

![image](https://github.com/labstreaminglayer/pylsl/assets/3221512/8749a5d8-16b5-4700-aca0-1859eb84df75)

After:

![image](https://github.com/labstreaminglayer/pylsl/assets/3221512/654d902b-3fe3-4c14-bccf-24b9153861ca)
Both before and after, `push_chunk` consumes about 0.7% of compute time.

Note: I'm not sure why some of the other parts of the call graphs differ. It might have to do with exactly when I `Ctrl-C` out of the Python script, as sometimes it took the first format and sometimes the second. This was not a very rigorous test, but the result matches intuition about the implementation, because `try:` clauses are very fast unless an exception is raised.
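That intuition can be spot-checked with `timeit` (a quick sanity check of the general pattern, not a substitute for profiling the real code path):

```python
import timeit


def coerce(timestamp):
    # Same shape as the fallback above: fast float path,
    # exception-driven fallback for sequences.
    try:
        return float(timestamp)
    except TypeError:
        return [float(t) for t in timestamp]


# The no-exception path stays cheap; the sequence path pays the cost
# of raising and catching TypeError on every call.
t_fast = timeit.timeit(lambda: coerce(1.0), number=50_000)
t_slow = timeit.timeit(lambda: coerce((1.0, 2.0)), number=50_000)
print(f"float path: {t_fast:.4f}s, exception path: {t_slow:.4f}s")
```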