SpikeInterface / spikeinterface

A Python-based module for creating flexible and robust spike sorting pipelines.
https://spikeinterface.readthedocs.io
MIT License

Sorting analyzer does not propagate sparsity to template extension #2877

Open h-mayorquin opened 6 months ago

h-mayorquin commented 6 months ago
from spikeinterface.core import create_sorting_analyzer, generate_ground_truth_recording

job_kwargs = dict(
    n_jobs=1,
    progress_bar=False,
    verbose=False,
    chunk_duration=1.0,
)

recording, sorting = generate_ground_truth_recording(num_channels=384, durations=[10], seed=0)

analyzer = create_sorting_analyzer(sorting, recording, sparse=True, format="memory", **job_kwargs)

random_spike_parameters = {
    "method": "all",
}

template_extension_parameters = {
    "ms_before": 3.0,
    "ms_after": 5.0,
    "operators": ["average"],
}

extensions = {
    "random_spikes": random_spike_parameters,
    "templates": template_extension_parameters,
}

analyzer.compute_several_extensions(
    extensions=extensions,
    **job_kwargs
)

templates = analyzer.get_extension("templates").get_data(outputs="Templates")
templates.sparsity
templates.are_templates_sparse()

Output: False
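To make the distinction concrete, here is a minimal NumPy sketch (toy shapes, not the SpikeInterface API) of the difference between "dense with zeros", which is what the templates extension currently returns, and a truly sparse representation that carries a per-unit channel mask:

```python
import numpy as np

# Toy shapes: 3 units, 40 samples, 6 channels, plus a per-unit channel
# mask (the kind of information a ChannelSparsity object holds).
num_units, num_samples, num_channels = 3, 40, 6
rng = np.random.default_rng(0)
mask = np.zeros((num_units, num_channels), dtype=bool)
mask[0, :2] = mask[1, 2:4] = mask[2, 4:] = True

# "Dense with zeros": a full (units, samples, channels) array where the
# channels outside each unit's mask are simply zeroed. The sparsity
# metadata itself is lost, so a consumer cannot tell which zeros are
# structural and which are real signal.
dense = rng.normal(size=(num_units, num_samples, num_channels))
dense[~np.broadcast_to(mask[:, None, :], dense.shape)] = 0.0

# "Truly sparse": only the masked channels are stored per unit, and the
# mask is kept alongside so channel identities can be recovered.
sparse = [dense[u][:, mask[u]] for u in range(num_units)]
print(dense.shape)                # (3, 40, 6)
print([s.shape for s in sparse])  # [(40, 2), (40, 2), (40, 2)]
```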

h-mayorquin commented 6 months ago

This also seems to be the case even when computing through the waveforms extension:


from spikeinterface.core import create_sorting_analyzer, generate_ground_truth_recording

job_kwargs = dict(
    n_jobs=1,
    progress_bar=True,
    verbose=True,
    chunk_duration=1.0,
)

recording, sorting = generate_ground_truth_recording(num_channels=384, durations=[10], seed=0)

analyzer = create_sorting_analyzer(sorting, recording, sparse=True, format="memory", **job_kwargs)

random_spike_parameters = {
    "method": "all",
}

waveform_extension_parameters = {
    "ms_before": 3.0,
    "ms_after": 5.0,
}

template_extension_parameters = {
    "ms_before": 3.0,
    "ms_after": 5.0,
    "operators": ["average"],
}

extensions = {
    "random_spikes": random_spike_parameters,
    "waveforms": waveform_extension_parameters,
    "templates": template_extension_parameters,
}

analyzer.compute_several_extensions(
    extensions=extensions,
    **job_kwargs
)

print(analyzer.get_extension("waveforms").get_data().shape)
templates = analyzer.get_extension("templates").get_data(outputs="Templates")
print(templates.are_templates_sparse())

Output:

(1471, 200, 20)  # So waveforms are sparse
False
samuelgarcia commented 5 months ago

Yes, I knew about this. It is because estimate_templates_with_accumulator() is not sparse. At the moment, templates are always represented densely internally; even with sparsity enabled we fill the unselected channels with zeros. We should change this, but it was done this way to keep compatibility with the waveforms extractor.
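For readers unfamiliar with the accumulator approach mentioned above, here is a hypothetical minimal sketch (not the actual estimate_templates_with_accumulator() implementation) of why the result is dense by construction: per-spike waveforms are summed into a full (units, samples, channels) buffer and divided by spike counts, so any sparsity mask would have to be applied afterwards.

```python
import numpy as np

def average_templates_dense(waveforms, unit_indices, num_units):
    """Average waveforms per unit via a dense accumulator buffer."""
    num_spikes, num_samples, num_channels = waveforms.shape
    accumulator = np.zeros((num_units, num_samples, num_channels))
    counts = np.zeros(num_units)
    for wf, u in zip(waveforms, unit_indices):
        accumulator[u] += wf  # sum over all channels, masked or not
        counts[u] += 1
    # Division produces a dense (units, samples, channels) array;
    # no channel mask is consulted anywhere in this path.
    return accumulator / counts[:, None, None]

rng = np.random.default_rng(0)
waveforms = rng.normal(size=(10, 20, 4))
unit_indices = np.array([0, 1] * 5)
templates = average_templates_dense(waveforms, unit_indices, num_units=2)
print(templates.shape)  # (2, 20, 4) -- dense, regardless of any sparsity
```

Making the templates extension sparsity-aware would then mean either accumulating only masked channels per unit or applying the analyzer's sparsity mask to this dense result before storing it.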