Closed. dmkobylkov closed this issue 1 day ago.
Hi, do you export each tetrode separately to phy? In that case, I think the problem is that there are no units on those tetrodes.
Hi @alejoe91, yes, I export them separately, but there is a unit on this tetrode with several thousand spikes.
Ah! Phy doesn't work with a single unit... that's a bug on their side. I would recommend aggregating the output of all tetrodes and exporting everything to phy.
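Conceptually, aggregation just concatenates the per-tetrode unit lists while remembering which tetrode each unit came from (in spikeinterface this is what aggregate_units does). A minimal pure-Python sketch of the idea, with made-up spike trains:

```python
# hypothetical per-tetrode results: {group: {unit_id: spike_train}}
sortings = {
    0: {0: [10, 55, 90]},
    1: {0: [12, 40], 1: [7, 99]},
}

aggregated = {}   # new global unit id -> spike train
unit_groups = {}  # new global unit id -> tetrode the unit came from
next_id = 0
for group, units in sorted(sortings.items()):
    for unit_id, train in sorted(units.items()):
        aggregated[next_id] = train
        unit_groups[next_id] = group
        next_id += 1
```

The combined set then has enough units for phy to load, and the tetrode labels survive as a per-unit property.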
Oh, I see. Thank you very much! Could you please tell me how I can combine the sortings?
@alejoe91, Just to explain the problem that I am dealing with. I have 64ch Open Ephys data (16 tetrodes). But my channels are not mapped correctly. I could not find any information on how to perform remapping in spikeinterface, so I am trying to do sorting on groups of tetrodes.
For the remapping, you can easily use probeinterface. See the documentation about mapping here: https://probeinterface.readthedocs.io/en/main/examples/ex_05_device_channel_indices.html
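The key idea behind device channel indices: contact k of the probe is wired to device row channel_order[k] of the raw data. A toy numpy sketch of the remapping this implies (hypothetical 4-channel wiring, not the real 64-channel map):

```python
import numpy as np

# hypothetical wiring: contact k was recorded on device row channel_order[k]
channel_order = [3, 1, 2, 0]

raw = np.arange(4 * 5).reshape(4, 5)  # fake device data: 4 rows, 5 samples

# reorder rows so that remapped[k] is the trace of contact k
remapped = raw[channel_order, :]
```

With probeinterface, set_global_device_channel_indices(channel_order) stores exactly this mapping on the ProbeGroup, and spikeinterface applies it once the probe is attached to the recording.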
Thank you very much, @alejoe91! I am sorry to bother you further, but I am not sure that I understand how to deal with the dict that I have now after sorting by group. I want to concatenate the sorting results and export them to phy, but I cannot find anything on it in the documentation.
By the way, is there a typo in this part of the documentation? Should it be sub_recording instead of recording in the loop:
# loop over recording and run a sorter
# here the result is a dict of a sorting object
sortings = {}
for group, sub_recording in recordings.items():
    sorting = run_sorter(sorter_name='kilosort2', recording=recording, output_folder=f"folder_KS2_group{group}")
    sortings[group] = sorting
Here is the code that I have so far.
from probeinterface import Probe, ProbeGroup, generate_tetrode
from probeinterface.plotting import plot_probe_group
import pandas as pd
import numpy as np
import os
import shutil
import matplotlib
matplotlib.use('TkAgg') # Ensure TkAgg (or another GUI-based backend) is used
#import seaborn as sns
import matplotlib.pyplot as plt
import spikeinterface.full as si
global_job_kwargs = dict(n_jobs=4, chunk_duration="1s")
si.set_global_job_kwargs(**global_job_kwargs)
import spikeinterface.extractors as se
import spikeinterface.preprocessing as pp
import spikeinterface.widgets as sw
from spikeinterface import sorters
from spikeinterface.exporters import export_to_phy
from spikeinterface.exporters import export_report
datapath = r'C:\Users\dmitry.kobylkov\Documents\timePerception\2024-05-24_09-12-31\Record Node 102\experiment1'
dataLocal = se.read_openephys(datapath)
#Given channel order
channel_order = [
    39, 37, 35, 33, 47, 45, 43, 41, 55, 53, 51,
    49, 57, 63, 61, 59, 62, 60, 58, 56, 54, 52, 50,
    48, 46, 44, 42, 40, 38, 36, 34, 32, 24, 26, 28,
    30, 16, 18, 20, 22, 8, 10, 12, 14, 0, 2, 4, 6,
    3, 5, 7, 1, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31
]
probegroup = ProbeGroup()
for i in range(16):
    tetrode = generate_tetrode()
    tetrode.move([i * 50, 0])
    probegroup.add_probe(tetrode)
probegroup.set_global_device_channel_indices(channel_order)
recording = dataLocal.set_probegroup(probegroup)
neural_channels = [f'CH{i+1}' for i in range(64)] # Generate 'CH1' to 'CH64'
recording_neural = recording.select_channels(neural_channels)
recording_neural = pp.bandpass_filter(recording_neural, freq_min=300, freq_max=6000)
recording_neural = pp.common_reference(recording_neural, operator='median', reference='global')
#split by property
recording_by_group = recording_neural.split_by("group")
folder_name = f"Phy_TetrodeAll" # Customize folder naming
folder_path = os.path.join(datapath, folder_name)
sortings = {}
for group, sub_recording in recording_by_group.items():
    sorting = sorters.run_sorter(
        sorter_name='mountainsort5',
        recording=sub_recording,
        folder=folder_path,
        remove_existing_folder=True
    )
    sortings[group] = sorting
@dmkobylkov thanks for sharing! Everything looks perfect, except for the last part:
#split by property
recording_by_group = recording_neural.split_by("group")
folder_name = f"Phy_TetrodeAll" # Customize folder naming
folder_path = os.path.join(datapath, folder_name)
sortings = {}
for group, sub_recording in recording_by_group.items():
    sorting = sorters.run_sorter(
        sorter_name='mountainsort5',
        recording=sub_recording,
        folder=folder_path,
        remove_existing_folder=True
    )
    sortings[group] = sorting
could be substituted entirely by this:
folder_name = f"Phy_TetrodeAll" # Customize folder naming
folder_path = os.path.join(datapath, folder_name)
sorting_with_group = sorters.run_sorter_by_property(
    sorter_name='mountainsort5',
    recording=recording_neural,
    grouping_property="group",
    working_folder=folder_path
)
The new sorting_with_group object will now have a group property with the group each unit was found on. You can then proceed to create a SortingAnalyzer, specifying that you want to compute the sparsity by group:
analyzer = si.create_sorting_analyzer(sorting_with_group, recording_neural, sparse=True, method="by_property", by_property="group")
This will define each unit on the channels of the corresponding tetrode. Then you can continue with computing the required extensions and export to phy.
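Sparsity by group simply restricts each unit to the channels of its own tetrode. A toy numpy illustration of the unit-by-channel mask this produces (hypothetical 8-channel, 2-tetrode layout; the unit-to-group assignment is made up):

```python
import numpy as np

channel_groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # tetrode id of each channel
unit_groups = {0: 0, 1: 1, 2: 1}                     # tetrode each unit was found on

# boolean mask: rows are units, columns are channels; True where the unit is defined
mask = np.array([channel_groups == g for _, g in sorted(unit_groups.items())])
```

Waveforms, templates, and the other analyzer extensions are then only computed on the True channels of each row.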
In phy, you will see a channel_group column with the tetrode each unit belongs to!
Hope this is clear :)
Thanks a lot @alejoe91 ! Now it seems to work fine! :))
I have Open Ephys data, where I group channels by tetrode and sort them independently with MountainSort5. For some tetrodes I am not able to load the results into phy, getting the following error:
I have tried to remove excess spikes, but it did not change anything. I am able to visualize the data in the spikeinterface_gui, so I assume that the problem is coming from export_to_phy function.
I have noticed a few small differences between the npy files of "bad" tetrodes and "good" tetrodes. 1. When I load the exported files, the last value in spike_times.npy and spike_clusters.npy is in square brackets []. Not sure if that is relevant.
I would really appreciate your help!