csn-le / wave_clus

A fast and unsupervised algorithm for spike detection and sorting using wavelets and super-paramagnetic clustering

Timestamp awareness of .mat files #210

Closed AHEsmaeili closed 2 years ago

AHEsmaeili commented 2 years ago

Hi Fernando!

I recently came across waveclus, and I want to use its automatic/batch sorting capabilities to process my single-channel .mat files. These files were converted from AlphaLab .map files (since that format is currently not supported by the package) and contain continuous data.

Following the guidelines in the wiki, I made a .mat file with two variables: data (the continuous data) and sr (the sampling rate, 25 kHz).

Understandably, however, since there was no indication of the start and end times of the recorded data, the timestamps in the spike-sorting results do not align with the actual timestamps of the recordings.

I wanted to ask whether there is a way to make the scripts aware of the actual timestamps of the continuous .mat files, either as an extra variable or as a separate file.

Thanks for your insight.

ferchaure commented 2 years ago

I would say the easiest alternative is to save the time offset of sample zero and then add it back in your own processing after wave_clus. But if you want, you can add a way to read the .map files (or just a custom file type with a made-up extension); take one of the existing reader codes as a reference. You only have to change the function that converts samples to milliseconds so that it adds the offset.
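The offset bookkeeping described above can be sketched as follows. This is illustrative Python, not wave_clus code (wave_clus itself is MATLAB), and the function and constant names are hypothetical:

```python
# Sketch of the suggested workflow: keep the absolute time of sample zero
# alongside the data, let the sorter report times relative to sample zero,
# then add the offset back. All names here are hypothetical.

SR_HZ = 25_000        # sampling rate of the recording
T0_S = 3305.35936     # absolute time of sample zero, saved separately

def sample_to_ms(sample_idx, sr_hz=SR_HZ, t0_s=T0_S):
    """Convert a sample index to an absolute timestamp in milliseconds,
    adding the sample-zero offset that a relative-time sorter would omit."""
    return (t0_s + sample_idx / sr_hz) * 1000.0

# A spike detected at sample 250000 (10 s into the file) maps to
# 3315.35936 s on the original recording's clock:
print(sample_to_ms(250_000))
```

The same arithmetic can live either in a post-processing script or, as Fernando suggests, inside the samples-to-milliseconds conversion of a custom reader.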

AHEsmaeili commented 2 years ago

Thanks for your response Fernando! Will try that asap.

Two more questions:

1) The data for each session are divided into separate files. Given how wave_clus handles sorting, would the results be reliable if I zero-pad (or mean-pad) the time gaps between the experiment blocks in each session?

2) The timestamps that wave_clus calculates are three orders of magnitude larger than those in the raw data. I was wondering whether I'm setting the sampling rate in the test file (25000 Hz) incorrectly, or whether wave_clus generates spike times in milliseconds automatically (and whether that could be changed via a parameter).

I've attached a sample file that I use for testing.

Start timestamp: 3305.35936 s
End timestamp: 3528.76032 s
testFile.zip

ferchaure commented 2 years ago
  1. Wave_clus doesn't use the timing for clustering, so it's possible to sort a recording concatenated that way (just take care to map the output timestamps back to the real timestamps). Be careful: if you have a moving animal or long periods between sessions, the waveforms could change a lot and you could get more than one cluster per neuron.
  2. The times in wave_clus are in ms; it's a design choice.
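Putting both answers together, mapping wave_clus's millisecond output back to the real clock of a multi-block recording is a matter of per-block offsets. A minimal sketch, assuming hypothetical per-block metadata (block start positions within the concatenated file and real start times); this is illustrative Python, not part of wave_clus:

```python
import bisect

SR_HZ = 25_000      # sampling rate (Hz)
MS_PER_S = 1000.0   # wave_clus reports spike times in ms

# Hypothetical metadata for a session concatenated from two blocks:
# where each block starts in the concatenated file, and its real start time.
blocks = [
    {"concat_start_sample": 0,         "real_start_s": 3305.35936},
    {"concat_start_sample": 5_600_000, "real_start_s": 3900.0},
]

def wave_clus_ms_to_real_s(spike_ms, blocks=blocks, sr_hz=SR_HZ):
    """Map a spike time (ms, relative to the concatenated file) back to
    the recording's real clock (s) using the containing block's offset."""
    spike_sample = spike_ms / MS_PER_S * sr_hz
    starts = [b["concat_start_sample"] for b in blocks]
    i = bisect.bisect_right(starts, spike_sample) - 1   # containing block
    b = blocks[i]
    return b["real_start_s"] + (spike_sample - b["concat_start_sample"]) / sr_hz

# A spike 10 s into the first block, and one 2 s into the second block
# (the second block begins 5.6e6 samples = 224 s into the concatenated file):
print(wave_clus_ms_to_real_s(10_000.0))
print(wave_clus_ms_to_real_s(226_000.0))
```

If the gaps between blocks are padded rather than cut out, the same lookup works as long as the padded samples are included in each block's `concat_start_sample`.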
AHEsmaeili commented 2 years ago

That's great!

The time between the experiment blocks is on the order of ~5 minutes and the animal was head-fixed, but I will take care of the timestamp conversion after concatenating (admittedly an intricate procedure).

Many thanks for your insight, and for this great package Fernando.