csn-le / wave_clus

A fast and unsupervised algorithm for spike detection and sorting using wavelets and super-paramagnetic clustering

Combining data from multiple recording sessions #183

Closed: drBush closed this issue 2 years ago

drBush commented 4 years ago

Hi Fernando

Sorry to bother you, and thanks for this incredibly useful toolbox. I am analysing single-channel microwire recordings from the human brain, but there is one issue that I keep coming back to: is it possible to combine multiple recording sessions from the same patient and identify clusters using data from all sessions, rather than identifying clusters separately in each session and then spending time determining how they relate to each other? I believe that Combinato offers this functionality, but I would prefer to work in Matlab rather than Python if possible.

Thanks in advance for your help

Dan

ferchaure commented 4 years ago

Hi Dan,

If the sessions are close enough in time that drift is not a big issue, you could concatenate the spikes and index variables in the _spikes files. Before concatenating, add a big number to all the indexes of the later session so you can easily separate the results later.

Then just use Do_clustering with this concatenated spikes file.
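A minimal sketch of that concatenation in Matlab (assuming each _spikes file contains the spikes, index, sr and par variables written by Get_spikes; the file names and the offset value are just placeholders):

```matlab
% Minimal sketch: merge two _spikes files into one before running Do_clustering.
s1 = load('session1_spikes.mat');   % placeholder file names
s2 = load('session2_spikes.mat');

offset = 1e7;                                   % "big number" added to the later session
spikes = [s1.spikes; s2.spikes];                % waveforms, one row per spike
index  = [s1.index(:); s2.index(:) + offset].'; % spike times, second session shifted

sr  = s1.sr;   % keep the metadata from the first session
par = s1.par;
save('combined_spikes.mat', 'spikes', 'index', 'sr', 'par');

Do_clustering('combined_spikes.mat');
```

Because the offset is known, the sorted results can later be split back into the two sessions by thresholding the spike times.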

drBush commented 4 years ago

That's great - thanks for the fast reply. I'll try that solution with our data and see how it looks.

Cheers

Dan

geenaiannilab commented 3 years ago

Hi there, also interested in doing this. Is it also necessary to concatenate the "psegment" portion of the individual spikes files? Can you briefly explain what this variable is? I could not find it in the params file, though I notice it differs between spikes files. Thank you!

ferchaure commented 3 years ago

Hi, concatenating psegment is not necessary. It's just a downsampled version of one minute of data that is shown in the GUI window.

geenaiannilab commented 3 years ago

Hi again, I did as you suggested and concatenated both the spikes and the indices (adding a big number between sessions). However, when I now try to run Do_clustering, I receive the error "Reference to non-existent field 'tmin'" in the function find_temp.m. In my set_parameters.m file it is clearly defined as par.tmin = 0 (line 36) and gets passed to do_clustering_single.m (within the par_file variable), but I think it is being overwritten/deleted somehow by the update_parameters function. Please let me know if you have a solution. Thanks very much.

ferchaure commented 3 years ago

Check if that parameter is inside the par variable of the file you created (par variable inside the spikes file)
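A quick way to check (and, if needed, patch) this, assuming the missing field is the tmin from the error above and using a placeholder file name:

```matlab
% Sketch: make sure the par struct saved inside the concatenated spikes file
% has the field the error complains about.
f = load('combined_spikes.mat');        % placeholder file name
if ~isfield(f.par, 'tmin')
    par = f.par;
    par.tmin = 0;                       % value from set_parameters.m in this thread
    save('combined_spikes.mat', 'par', '-append');
end
```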