I opened https://github.com/raphaelvallat/yasa/issues/50#issue and the same improvement could be done here, too. The idea is to avoid the for loop, which takes more time than NumPy ufuncs, especially when the number of events is huge.

My proposal for `_index_to_events` is the following:
```python
import numpy as np

def _index_to_events(x):
    """Convert an array of [start, end] index pairs into a flat array of event samples."""
    x_copy = np.copy(x)
    # Make the end index exclusive so it can serve as a split point
    x_copy[:, 1] += 1
    # Flatten to alternating split points: start0, end0, start1, end1, ...
    split_idx = x_copy.reshape(-1).astype(int)
    full_idx = np.arange(split_idx.max())
    # After splitting, the odd-numbered chunks are the within-event samples
    index = np.split(full_idx, split_idx)[1::2]
    index = np.concatenate(index)
    return index
```
This is fully compatible with the current implementation according to my tests. For benchmarks, see https://github.com/raphaelvallat/yasa/issues/50#issuecomment-1000740304.

For `_events_to_index` I tend to use scipy. Two potential problems there: first, there might be a reason to avoid scipy in this codebase; second, it is not fully compatible with the current implementation, as it would merge events that overlap in time. Are these real problems, or are they out of scope given how this function is used?

Please tell me what you think about it. Is it really beneficial to do so, etc.? :)