Closed: Antho2422 closed this issue 5 days ago.
That is interesting. What if you do len(sorting_analyzer.sorting.to_spike_vector()), just to make sure it's the same data? :)
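Something like this, just as a rough sketch (assuming both objects are still in your session):

```python
# Compare the spike vector of the standalone sorting with the one
# attached to the analyzer.
n_sorting = len(sorting.to_spike_vector())
n_analyzer = len(sorting_analyzer.sorting.to_spike_vector())
print(n_sorting, n_analyzer, n_sorting == n_analyzer)
```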
I get 71644 as expected. So this means the sorting attached to the sorting analyzer is not exactly the same as the sorting object I got from the sorting process... Any idea what could explain that?
The sorting I loaded previously was the one directly saved by the sorting process using SC2; I did not save it manually using sorting.save().
I don't remember, but we may have automatically incorporated a cleaning step for excess spikes in the analyzer. Let me check.
It doesn't look like we auto-cleaned. Maybe @alejoe91 or @samuelgarcia would know this better?
@Antho2422 did you do some merging? Are you 100% sure that the sorting object is the same one you fed to the sorting analyzer?
@alejoe91 @zm711 that is my bad, sorry.
I am in fact doing a curation step that removes duplicated spikes... I forgot about that. But could you quickly remind me what sorting = remove_duplicated_spikes(sorting) does? What do you call 'duplicated spikes'?
Duplicated spikes are the ones that occur within a very short period after a previous spike, so basically the ones that fall within the refractory period.
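If it helps, this is roughly how it is used (a minimal sketch; check the docstring of your installed spikeinterface version for the exact signature and defaults):

```python
from spikeinterface.curation import remove_duplicated_spikes

# Spikes that fall within the censored period after a previous spike
# are considered duplicates and are removed from the sorting.
# censored_period_ms here is an illustrative value, not necessarily the default.
clean_sorting = remove_duplicated_spikes(sorting, censored_period_ms=0.3)
```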
Okay, thank you!
Hello,
I have a question about an inconsistency I found in my data between the number of spikes contained in the spike vector and the spike amplitudes array.
len(sorting.to_spike_vector()) = 71751
len(sorting_analyzer.load_extension('spike_amplitudes').get_data()) = 71644
It seems like the amplitude is not computed for some spikes (107 of them), for some reason. At first I thought this might be due to the random_spikes extension, where the parameter max_spikes_per_unit is set to 500. But if that were the case, I should have 7500 spikes in my spike_amplitudes vector, since I have 15 units in my sorting object, right? Can someone help me understand this?
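Just to spell out the arithmetic behind that last point (a quick sketch using the numbers from my session):

```python
# If the random_spikes selection (max_spikes_per_unit=500) explained the
# mismatch, the amplitude vector should contain at most n_units * 500 spikes.
n_units = len(sorting.unit_ids)          # 15 in my case
expected_if_subsampled = n_units * 500   # 7500, nowhere near 71644
missing = 71751 - 71644                  # 107 spikes without an amplitude
print(expected_if_subsampled, missing)
```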
Thanks in advance, Anthony