cortex-lab / KiloSort

GPU code for spike sorting
GNU General Public License v2.0

Applying to three-dimensional devices #157

Open umiu60135 opened 6 years ago

umiu60135 commented 6 years ago

Hello!

Since the code can currently only be used with 2D devices, I am wondering whether it would be easy to change it to work with 3D devices.

Thank you!

Best, Zhongpeng

marius10p commented 6 years ago

Hmm, the position information is mostly used for visualization in Phy; the underlying algorithm does not depend on it. It would require some changes nonetheless. I wonder, though, whether you truly have a 3D array of channels, or whether it could be broken down into multiple shanks isolated from each other. In that case you could use the kcoords parameter to sort the shanks separately.
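For reference, a minimal sketch of what such a multi-shank channel map could look like, following the variable names used in the example channel-map scripts shipped with Kilosort; the geometry here (4 shanks of 16 sites each, the pitches, and the file name) is made up for illustration:

```matlab
% Hypothetical 64-channel probe treated as four isolated shanks.
Nchan       = 64;
chanMap     = (1:Nchan)';
chanMap0ind = chanMap - 1;
connected   = true(Nchan, 1);
xcoords     = reshape(repmat((0:3) * 200, 16, 1), [], 1);  % 4 shanks, 200 um apart
ycoords     = repmat((0:15)' * 20, 4, 1);                  % 16 sites per shank, 20 um pitch
kcoords     = ceil(chanMap / 16);                          % shank index 1..4 for each channel
fs          = 30000;                                       % sampling rate in Hz

save('chanMap_4shank.mat', 'chanMap', 'chanMap0ind', 'connected', ...
     'xcoords', 'ycoords', 'kcoords', 'fs');
```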

umiu60135 commented 6 years ago

Thank you so much! However, our channels are on a very flexible device whose electrodes are distributed somewhat randomly in space, so it doesn't seem suitable to break the channels down into multiple shanks.

marius10p commented 6 years ago

Ok, I think Kilosort is ideal for that sort of thing because it doesn't force templates to be localized over channels like all other pipelines do. Instead it uses a low-rank decomposition to regularize waveforms. It also drops any channels with magnitudes less than 1/10 of the peak.

I had a look through the code, and I think the only place we use the x,y coordinates is in the local whitening step. You can avoid it by setting the whitening mode to 'full', which whitens all channels together. If you have a lot of channels, though, you might want to figure out how to run this function with 3D channel positions (it needs to know which channels are close together):

https://github.com/cortex-lab/KiloSort/blob/0fbe8ebbae4ebddf44fdfe60bb6b4d9c1888826f/preprocessData.m#L183

Speaking of which, how many channels do you have? If it's too many you might run into other problems too....
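In case it helps, here is a minimal sketch of the 3D distance computation such a step would need. This is not the function linked above, just an illustration; the function name, and the argument names xc, yc, zc (Nchan x 1 position vectors) and nRange, are placeholders:

```matlab
% Find the nRange nearest neighbours of every channel using full 3D positions.
function iC = closestChannels3D(xc, yc, zc, nRange)
    Nchan = numel(xc);
    iC = zeros(nRange, Nchan);
    for j = 1:Nchan
        % squared Euclidean distance from channel j to every channel
        d2 = (xc - xc(j)).^2 + (yc - yc(j)).^2 + (zc - zc(j)).^2;
        [~, order] = sort(d2, 'ascend');
        iC(:, j) = order(1:nRange);   % nearest channels, including j itself
    end
end
```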

umiu60135 commented 6 years ago

We usually have 32 or 64 channels in our devices at the moment. By the way, when you say "the position information is mostly used for visualization in Phy", do you mean that I can't use Phy at all, or just that some position-related features won't work? Can I still refine my results even if I don't provide position information?

Thank you!

marius10p commented 6 years ago

Phy requires some position information to show waveforms spatially. You can specify it however you want, including as a single column of channels if you don't have any other information. That is actually assumed as default if you don't provide it.
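A sketch of the kind of dummy single-column layout described here, expressed as the channel-map coordinates that (as far as I understand) end up in the channel_positions.npy that Phy reads; Nchan is a placeholder for your actual channel count:

```matlab
% Dummy single-column layout for when true positions are unknown.
Nchan   = 32;
xcoords = ones(Nchan, 1);   % every site in the same column
ycoords = (1:Nchan)';       % arbitrary vertical ordering
```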

brendonw1 commented 6 years ago

Yes, you can still use Phy to refine your spikes. Phy is just for human interaction, and the spatial-arrangement features are only there to help you, as a human, better understand the spikes you are looking at. If you have splayed-out carbon fibers or something similar and can't know the spatial arrangement, you can display the channels however you like and still definitely use Phy.

One note: it will be a bit more painful, since Phy tends to give you better visualization not just by laying channels out but also by zooming into the regions containing the template-contributing channels. If those are not spatially localized, it won't really be able to zoom reliably, because the channels for any given template will be scattered. I don't know if there's a way around this short of doing some coding yourself (i.e., telling it to show only the channels used and to drop the spatial layout among them, so you can compress them spatially for zoomed-in, high-resolution viewing). Fundamentally it'll work, though.

nsteinme commented 6 years ago

You could construct a distance metric between each pair of channels, based on how many templates are observed on both channels (and how similar the templates look on the two channels), and then use multi-dimensional scaling or t-SNE or whatever to place the channels in 2D based on that distance matrix? :)
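An editorial sketch of this idea, assuming A is an nTemplates x nChannels matrix of per-channel template amplitudes (e.g. the peak-to-peak amplitude of each exported template on each channel); requires the Statistics and Machine Learning Toolbox for corr/cmdscale:

```matlab
ampThresh = 0.1 * max(A, [], 2);            % a template "appears" on a channel
onChan    = A >= ampThresh;                 % if it reaches 10% of its own peak

% similarity: how many templates two channels share, weighted by how similar
% the amplitude profiles of the two channels are across templates
S = corr(A) .* (double(onChan') * double(onChan));
S = S / max(S(:));

D = 1 - S;                                  % turn similarity into dissimilarity
D = (D + D') / 2;                           % enforce exact symmetry
D(1:size(D,1)+1:end) = 0;                   % zero diagonal, as MDS expects

Y  = cmdscale(D);                           % classical multidimensional scaling
xy = Y(:, 1:2);                             % 2D channel positions to hand to Phy
```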

sin-mike commented 6 years ago

@marius10p @umiu60135

I do have true 3D recordings, from surface electrodes and in-depth silicon electrodes simultaneously. My workaround was to use different scales: e.g. I scaled the surface electrode coordinates 100x so that the principal group of surface sites ended up next to the adjacent silicon electrodes. Then I was able to see spike propagation. It didn't affect any of the processing, as Phy/Kilosort doesn't require this info for the clustering, but it was much better for the visualization part. I must admit it would be a lot handier to be able to check/uncheck channels for plotting on the fly.
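A minimal sketch of what this rescaling might look like when applied to a standard Kilosort channel-map file; the file names, surfIdx (which channels belong to the surface array), and the scale factor are placeholders for illustration:

```matlab
% Assumes chanMap.mat contains the usual chanMap, chanMap0ind, connected,
% xcoords, ycoords, kcoords and fs variables.
load('chanMap.mat');

surfIdx = 33:64;                               % hypothetical surface-array channels
scale   = 100;                                 % blow up the surface geometry so its
xcoords(surfIdx) = xcoords(surfIdx) * scale;   % sites sit next to the silicon
ycoords(surfIdx) = ycoords(surfIdx) * scale;   % sites they should be compared to

save('chanMap_rescaled.mat', 'chanMap', 'chanMap0ind', 'connected', ...
     'xcoords', 'ycoords', 'kcoords', 'fs');
```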

BR Mike

umiu60135 commented 6 years ago

Hi Mike,

Thanks for sharing your idea. I'm really interested in it, but I'm sorry I don't totally get it. Could you provide some further physical context? Thank you so much!

Best, Zhongpeng