SpikeInterface / spikeinterface

A Python-based module for creating flexible and robust spike sorting pipelines.
https://spikeinterface.readthedocs.io
MIT License

How to sort only a segment of an Open Ephys recording #2107

Open nmtimme opened 11 months ago

nmtimme commented 11 months ago

Hello

I would appreciate any advice on how to sort only a segment of an Open Ephys recording. A portion of my recordings have a discontinuity in the timestamps which produces an error like #1194 . I've found that this error is caused by a short ~17 ms discontinuity ~34 ms before the end of the recording. The recording timestamps also do not start at 0, but I have other recordings that do not start at zero and spike sort successfully (so long as they do not have the discontinuity at the end).
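For reference, a gap like this shows up as a jump in the consecutive timestamp differences; here is a minimal numpy sketch of how one might locate it, using synthetic timestamps and an assumed 30 kHz rate (not the real recording):

```python
import numpy as np

fs = 30_000  # assumed sampling rate in Hz
# Synthetic timestamps: one second of samples with a ~17 ms gap
# inserted 1000 samples before the end, mimicking the discontinuity.
t = np.arange(30_000) / fs
t[-1000:] += 0.017

# A gap is any spacing noticeably larger than one sample period.
gaps = np.flatnonzero(np.diff(t) > 1.5 / fs)
print(gaps)  # index of the sample just before the jump
```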

I attempted to use the ignore_timestamps_errors option described in https://github.com/NeuralEnsemble/python-neo/issues/1210 and https://github.com/NeuralEnsemble/python-neo/pull/1213, but this did not work. I received a syntax error, though perhaps I am not using the most up-to-date version of the code. I'd rather just cut off the end of the recording anyway, in case there are other discontinuities in the middle of the recording that I should be aware of.

Is there a way in SpikeInterface to extract and sort only a certain part of the recording, such as from sample 1 to sample end - 30000? I believe this would need to happen at:

raw_rec = si.read_openephys(openephys_folder,stream_name='Signals CH')

I investigated the block_index argument, but I'm not sure of the correct syntax. I'd appreciate any advice people could provide. I've looked through the issues and documentation and couldn't find anything that directly addresses this point; if I missed it, I apologize, and please direct me to the correct resource. Thank you!

~Nick

zm711 commented 11 months ago

@nmtimme,

I'm trying to understand at which point the error is occurring: at extraction from the file, like in #1194, or afterwards when trying to sort.

If the extraction is successful, then you can simply slice the recording as explained here (I've copied the relevant line below):

sub_recording = recording.frame_slice(start_frame=int(fs * 60 * 5), end_frame=int(fs * 60 * 12))
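Since frame_slice takes sample indices rather than seconds, times have to be multiplied by the sampling rate first. A quick sketch of that arithmetic with assumed values (30 kHz, keeping minutes 5 through 12, matching the line above):

```python
# Assumed sampling rate; use recording.sampling_frequency for real data.
fs = 30_000

start_frame = int(fs * 60 * 5)   # 5 minutes, expressed in samples
end_frame = int(fs * 60 * 12)    # 12 minutes, expressed in samples

print(start_frame, end_frame)  # these are the values frame_slice receives
```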

If the error is in the extraction, it would help to know your versions of spikeinterface and Neo, to see if this is a version issue. Could you post the exact error you're getting along with the version numbers of your packages? Thanks.

nmtimme commented 11 months ago

Hi @zm711 !

Thank you for responding so quickly!

The error is occurring at extraction. We have spikeinterface version 0.98.2 and Neo version 0.12.0. This is the code we are running (note: there is a lot of code here related to loading a job list; the error occurs on the last line of what I've copied):

# Import necessary pieces of software
import spikeinterface.full as si
import probeinterface as pi
import neo
from probeinterface import Probe
from probeinterface.plotting import plot_probe
print(si.__version__)
print(neo.__version__)

import numpy as np
import matplotlib.pyplot as plt
import csv as csv
import pathlib
# from pathlib import Path

# Load the data set list
f = open('/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/cambridgeRecordings.txt', 'r')
csvreader = csv.reader(f, delimiter=',')
dataSetList =[]
for row in csvreader:
    dataSetList.append(row)
f.close()

# Iterate through each data set
for dataSet in dataSetList:

    # Set the open ephys folder path, data set name, channel map, and the location of the spike sorted data for this data set
    openephys_folder = dataSet[0]
    channelMapFile = dataSet[1]
    path = pathlib.PurePath(openephys_folder)
    dataSetName = path.name
    spikeSortedDataFolder = '/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/' + dataSetName

    print('Processing Data Set:',dataSetName)
    print('Channel Map Name:',channelMapFile)

    # Load the channel map
    f = open('/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeChannelMapFiles/' + channelMapFile + '.txt', 'r')
    reader = f.readlines()
    channelMapFileData = []
    for row in reader:
        channelMapFileData.append(row)
    f.close()
    probeName = channelMapFileData[0].strip()
    nChannels = len(channelMapFileData) - 1
    print('nChannels:',nChannels)
    device_channel_indices = np.arange(nChannels)
    for i in np.arange(1,nChannels + 1):
        device_channel_indices[i - 1] = int(channelMapFileData[i]) - 1
    print('Probe Name:',probeName)
    print('device_channel_indices:',device_channel_indices)

    # Read the ephys data (stream names are 'Signals AUX', 'Signals CH', 'Signals ADC')
    raw_rec = si.read_openephys(openephys_folder,stream_name='Signals CH')

This is the error we get:

Traceback (most recent call last):
  File "/geode2/home/u050/lapishla/BigRed200/spikeInterfaceVer1/spikeSortCambridgeData.py", line 63, in <module>
    raw_rec = si.read_openephys(openephys_folder,stream_name='Signals CH')
  File "/N/u/lapishla/BigRed200/.conda/envs/si_env/lib/python3.9/site-packages/spikeinterface/extractors/neoextractors/openephys.py", line 280, in read_openephys
    recording = OpenEphysLegacyRecordingExtractor(folder_path, **kwargs)
  File "/N/u/lapishla/BigRed200/.conda/envs/si_env/lib/python3.9/site-packages/spikeinterface/extractors/neoextractors/openephys.py", line 56, in __init__
    NeoBaseRecordingExtractor.__init__(
  File "/N/u/lapishla/BigRed200/.conda/envs/si_env/lib/python3.9/site-packages/spikeinterface/extractors/neoextractors/neobaseextractor.py", line 185, in __init__
    _NeoBaseExtractor.__init__(self, block_index, **neo_kwargs)
  File "/N/u/lapishla/BigRed200/.conda/envs/si_env/lib/python3.9/site-packages/spikeinterface/extractors/neoextractors/neobaseextractor.py", line 25, in __init__
    self.neo_reader = self.get_neo_io_reader(self.NeoRawIOClass, **neo_kwargs)
  File "/N/u/lapishla/BigRed200/.conda/envs/si_env/lib/python3.9/site-packages/spikeinterface/extractors/neoextractors/neobaseextractor.py", line 64, in get_neo_io_reader
    neo_reader.parse_header()
  File "/N/u/lapishla/BigRed200/.conda/envs/si_env/lib/python3.9/site-packages/neo/rawio/baserawio.py", line 179, in parse_header
    self._parse_header()
  File "/N/u/lapishla/BigRed200/.conda/envs/si_env/lib/python3.9/site-packages/neo/rawio/openephysrawio.py", line 118, in _parse_header
    assert np.all(diff == RECORD_SIZE), \
AssertionError: Not continuous timestamps for 100_AUX1.continuous. Maybe because recording was paused/stopped.

We've found (with the help of @jsiegle ) that the last record in the Open Ephys recordings is all zeros and possesses a discontinuity in the timestamps (https://github.com/open-ephys/open-ephys-python-tools/issues/25). We're hoping there is a way in spikeinterface to extract everything except this last part of the recording when loading the Open Ephys data. We'd appreciate any advice you can provide. Thank you!

~Nick

zm711 commented 11 months ago

@nmtimme Hey Nick,

I see. So the potential fix for this issue (python-neo#1213) has not been included in a Neo release yet (it will be in Neo 0.13.0 whenever that is released). Your best option is to install Neo from source (i.e., 0.13.0.dev).

To do this you have two options. The first is a pip install pointed directly at the git repository (this gets the dev version without leaving a local editable copy around):

pip install git+https://github.com/NeuralEnsemble/python-neo.git

Or you can clone the source repository like this (leaving an editable copy in case you want to pull changes if more patches are made):

git clone https://github.com/NeuralEnsemble/python-neo.git
cd python-neo
pip install -e .
cd ..

Either way, make sure you are inside your working si_env so you don't install it in the wrong place.
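One quick way to confirm which copy you ended up with afterwards: release and source installs are distinguishable by their version strings, so checking neo.__version__ for a "dev" marker tells you whether the source install took. A hypothetical sketch (the version strings below are illustrative):

```python
# Source (dev) builds of Neo report versions like "0.13.0.dev0",
# while pip releases report plain versions like "0.12.0".
def is_dev_install(version: str) -> bool:
    return "dev" in version

# After the install above, you'd pass neo.__version__ here.
assert is_dev_install("0.13.0.dev0")
assert not is_dev_install("0.12.0")
```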

Then you can run the extraction (with ignore_timestamps_errors), perform the slicing step (as above), and then follow the advice in https://github.com/NeuralEnsemble/python-neo/issues/1210: save the slice as a binary recording for future reuse, so you don't have to worry about the small signal dropouts again.

Does that all make sense?

Zach

PS While you're at it, it might be a good idea to install spikeinterface from source as well, since there have been plenty of bug fixes and new features added since the 0.98.2 release.

nmtimme commented 11 months ago

Hi Zach (@zm711)!

Thank you for those suggestions. We may try that in the future, but we were able to get a solution running that simply trims the Open Ephys data (https://github.com/open-ephys/open-ephys-python-tools/issues/25). Thank you again!

~Nick

nmtimme commented 10 months ago

Hi Zach (@zm711)!

I'm reopening this issue because we've pivoted back to the method you suggested (updating Neo), and we're encountering a problem with sliced data in the kilosort3 container. This data set has a weird discontinuity at the end of the recording (in the last 34 ms). We also have some recordings with weird discontinuities in the first second. Therefore, we tried to cut off the first and last two seconds.

Here is the code we are running (we are running this on a cluster, so there's some code related to that functionality):

import spikeinterface.full as si
import probeinterface as pi
import neo
from probeinterface import Probe
from probeinterface.plotting import plot_probe
print('si version:',si.__version__)
print('neo version:',neo.__version__)

import numpy as np
import matplotlib.pyplot as plt
import csv as csv
import pathlib
import sys
import shutil
import os
# from pathlib import Path

# Get the task ID
taskID = int(sys.argv[1])
# taskID = 1
print('Task ID:',taskID)

# Load the data set list
f = open('/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/cambridgeRecordings.txt', 'r')
csvreader = csv.reader(f, delimiter=',')
dataSetList =[]
for row in csvreader:
    dataSetList.append(row)
f.close()

# Set the open ephys folder path, data set name, channel map, and the location of the spike sorted data for this data set
openephys_folder = dataSetList[taskID - 1][0]
channelMapFile = dataSetList[taskID - 1][1]
path = pathlib.PurePath(openephys_folder)
dataSetName = path.name
spikeSortedDataFolder = '/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/' + dataSetName
spikeSortTempFolder = '/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/' + dataSetName + '/temp'

print('Processing Data Set:',dataSetName)
print('Channel Map Name:',channelMapFile)

# Load the channel map
f = open('/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeChannelMapFiles/' + channelMapFile + '.txt', 'r')
reader = f.readlines()
channelMapFileData = []
for row in reader:
    channelMapFileData.append(row)
f.close()
probeName = channelMapFileData[0].strip()
nChannels = len(channelMapFileData) - 1
print('nChannels:',nChannels)
device_channel_indices = np.arange(nChannels)
for i in np.arange(1,nChannels + 1):
    device_channel_indices[i - 1] = int(channelMapFileData[i]) - 1
print('Probe Name:',probeName)
print('device_channel_indices:',device_channel_indices)

# Read the ephys data (stream names are 'Signals AUX', 'Signals CH', 'Signals ADC')
raw_rec = si.read_openephys(openephys_folder,stream_name='Signals CH',ignore_timestamps_errors=True)
print('raw_rec',raw_rec)

# Remove the first and last two seconds
sampling_frequency = raw_rec.sampling_frequency
num_total_samples = raw_rec.get_total_samples()
raw_rec = raw_rec.frame_slice(start_frame=int(2*sampling_frequency),end_frame=(num_total_samples - int(2*sampling_frequency)))
print('raw_rec_sliced',raw_rec)

# Create the probe, modify it, and install it with the recording
probe = pi.get_probe(manufacturer='cambridgeneurotech', probe_name=probeName)
probe.set_device_channel_indices(device_channel_indices)
raw_rec.set_probe(probe, in_place=True, group_mode="by_shank")
print('Probe:',probe)

# Plot the probe, if desired
plot_probe(probe,with_device_index=True,with_channel_index=True)
# plt.show() # Uncomment to show plot

# High-pass filter the data
rec1 = si.highpass_filter(raw_rec, freq_min=300.)

# Show the noise histogram
noise_levels_microV = si.get_noise_levels(rec1, return_scaled=True)
noise_levels_int16 = si.get_noise_levels(rec1, return_scaled=False)
fig, ax = plt.subplots()
_ = ax.hist(noise_levels_microV, bins=np.arange(5, 30, 2.5))
ax.set_xlabel('noise  [microV]')

# Identify bad channels and remove them
bad_channel_ids, channel_labels = si.detect_bad_channels(rec1,method="std",std_mad_threshold=3)
rec2 = rec1.remove_channels(bad_channel_ids)
print('bad_channel_ids', bad_channel_ids)

# Show the noise histogram
noise_levels_microV = si.get_noise_levels(rec2, return_scaled=True)
noise_levels_int16 = si.get_noise_levels(rec2, return_scaled=False)
fig, ax = plt.subplots()
_ = ax.hist(noise_levels_microV, bins=np.arange(5, 30, 2.5))
ax.set_xlabel('noise  [microV]')

# Common median reference the data
rec3 = si.common_reference(rec2, operator="median", reference="global")
print('rec',rec3)

# Show the noise histogram
noise_levels_microV = si.get_noise_levels(rec3, return_scaled=True)
noise_levels_int16 = si.get_noise_levels(rec3, return_scaled=False)
fig, ax = plt.subplots()
_ = ax.hist(noise_levels_microV, bins=np.arange(5, 30, 2.5))
ax.set_xlabel('noise  [microV]')
# plt.show() # Uncomment to show plot

# Look at some example traces
fig, ax = plt.subplots(figsize=(20, 10))
some_chans = rec3.channel_ids[[5, 10, 20, ]]
si.plot_traces({'filter':rec1, 'cmr': rec3}, backend='matplotlib', mode='line', ax=ax, channel_ids=some_chans)
# plt.show() # Uncomment to show plot

# Get the kilosort3 parameters
KS3Params = si.get_default_sorter_params('kilosort3')
print('KS3Params:',KS3Params)

# Turn off drift correction (update in place; reassigning the dict would
# discard the other parameters)
KS3Params['do_correction'] = False

# Increase the NT parameter to avoid "EIG did not converge" errors
KS3Params['NT'] = 64000

# Print recording properties
# print('Recording properties:',rec.get_property_keys())

# Run the spike sorting with each shank sorted separately (does not work with kilosort3 for shanks with ~12 or fewer channels)
# sorting = si.run_sorter_by_property('kilosort3', rec3, grouping_property='group', working_folder=spikeSortTempFolder, singularity_image="spikeinterface/kilosort3-compiled-base:latest", verbose=True, **KS3Params)

# Run the spike sorting
sorting = si.run_sorter('kilosort3', rec3, output_folder=spikeSortTempFolder, singularity_image="spikeinterface/kilosort3-compiled-base:latest", verbose=True, **KS3Params)

Here is the output:

si version: 0.99.0.dev0
neo version: 0.13.0.dev0
Task ID: 2
Processing Data Set: 2022-02-25_09-04-11
Channel Map Name: MBEphysChannelMap1
nChannels: 64
Probe Name: ASSY-156-F
device_channel_indices: [47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32 31 30 29 28 27 26 25 24
 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
 63 62 61 60 59 58 57 56 55 54 53 52 51 50 49 48]
raw_rec OpenEphysLegacyRecordingExtractor: 64 channels - 30.0kHz - 1 segments - 378,920,960 samples 
                                   12,630.70s (3.51 hours) - int16 dtype - 45.17 GiB
raw_rec_sliced FrameSliceRecording: 64 channels - 30.0kHz - 1 segments - 378,800,960 samples 
                     12,626.70s (3.51 hours) - int16 dtype - 45.16 GiB
Probe: cambridgeneurotech - ASSY-156-F - 64ch - 6shanks
bad_channel_ids []
rec CommonReferenceRecording: 64 channels - 30.0kHz - 1 segments - 378,800,960 samples 
                          12,626.70s (3.51 hours) - int16 dtype - 45.16 GiB
KS3Params: {'detect_threshold': 6, 'projection_threshold': [9, 9], 'preclust_threshold': 8, 'car': True, 'minFR': 0.2, 'minfr_goodchannels': 0.2, 'nblocks': 5, 'sig': 20, 'freq_min': 300, 'sigmaMask': 30, 'nPCs': 3, 'ntbuff': 64, 'nfilt_factor': 4, 'do_correction': True, 'NT': None, 'AUCsplit': 0.8, 'wave_length': 61, 'keep_good_only': False, 'skip_kilosort_preprocessing': False, 'scaleproc': None, 'save_rez_to_mat': False, 'delete_tmp_files': ('matlab_files',), 'delete_recording_dat': False, 'n_jobs': 128, 'chunk_duration': '1s', 'progress_bar': True, 'mp_context': None, 'max_threads_per_process': 1}
Starting container
Installing spikeinterface from sources in spikeinterface/kilosort3-compiled-base:latest
Installing dev spikeinterface from remote repository
Installing extra requirements: ['neo']
Running kilosort3 sorter inside spikeinterface/kilosort3-compiled-base:latest
Stopping container

And here is the error:

Traceback (most recent call last):
  File "/geode2/home/u050/lapishla/BigRed200/spikeInterfaceVer2/spikeSortCambridgeDataJobScript.py", line 143, in <module>
    sorting = si.run_sorter('kilosort3', rec3, output_folder=spikeSortTempFolder, singularity_image="spikeinterface/kilosort3-compiled-base:latest", verbose=True, **KS3Params)
  File "/geode2/home/u050/lapishla/BigRed200/spikeInterfaceVer2/spikeinterface/src/spikeinterface/sorters/runsorter.py", line 142, in run_sorter
    return run_sorter_container(
  File "/geode2/home/u050/lapishla/BigRed200/spikeInterfaceVer2/spikeinterface/src/spikeinterface/sorters/runsorter.py", line 595, in run_sorter_container
    raise SpikeSortingError(f"Spike sorting in {mode} failed with the following error:\n{run_sorter_output}")
spikeinterface.sorters.utils.misc.SpikeSortingError: Spike sorting in singularity failed with the following error:
Traceback (most recent call last):
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_sorter_script.py", line 9, in <module>
    recording = load_extractor('/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_recording.json')
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_python_base/lib/python3.9/site-packages/spikeinterface/core/base.py", line 1078, in load_extractor
    return BaseExtractor.load(file_or_folder_or_dict, base_folder=base_folder)
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_python_base/lib/python3.9/site-packages/spikeinterface/core/base.py", line 670, in load
    extractor = BaseExtractor.from_dict(d, base_folder=base_folder)
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_python_base/lib/python3.9/site-packages/spikeinterface/core/base.py", line 435, in from_dict
    extractor = _load_extractor_from_dict(dictionary)
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_python_base/lib/python3.9/site-packages/spikeinterface/core/base.py", line 998, in _load_extractor_from_dict
    new_kwargs[name] = _load_extractor_from_dict(value)
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_python_base/lib/python3.9/site-packages/spikeinterface/core/base.py", line 998, in _load_extractor_from_dict
    new_kwargs[name] = _load_extractor_from_dict(value)
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_python_base/lib/python3.9/site-packages/spikeinterface/core/base.py", line 998, in _load_extractor_from_dict
    new_kwargs[name] = _load_extractor_from_dict(value)
  [Previous line repeated 1 more time]
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_python_base/lib/python3.9/site-packages/spikeinterface/core/base.py", line 1017, in _load_extractor_from_dict
    extractor = extractor_class(**new_kwargs)
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_python_base/lib/python3.9/site-packages/spikeinterface/extractors/neoextractors/openephys.py", line 79, in __init__
    NeoBaseRecordingExtractor.__init__(
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_python_base/lib/python3.9/site-packages/spikeinterface/extractors/neoextractors/neobaseextractor.py", line 185, in __init__
    _NeoBaseExtractor.__init__(self, block_index, **neo_kwargs)
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_python_base/lib/python3.9/site-packages/spikeinterface/extractors/neoextractors/neobaseextractor.py", line 25, in __init__
    self.neo_reader = self.get_neo_io_reader(self.NeoRawIOClass, **neo_kwargs)
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_python_base/lib/python3.9/site-packages/spikeinterface/extractors/neoextractors/neobaseextractor.py", line 64, in get_neo_io_reader
    neo_reader.parse_header()
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_python_base/lib/python3.9/site-packages/neo/rawio/baserawio.py", line 178, in parse_header
    self._parse_header()
  File "/N/project/lapishLabWorkspace/SpikeInterfaceSpikeSorting/CambridgeProbeSpikeSortingResults/2022-02-25_09-04-11/in_container_python_base/lib/python3.9/site-packages/neo/rawio/openephysrawio.py", line 120, in _parse_header
    raise ValueError(
ValueError: Not continuous timestamps for 100_AUX1.continuous. Maybe because recording was paused/stopped.

It seems like the container isn't properly using the sliced recording and is instead attempting to reload the raw recording. I'd appreciate any thoughts people might have on this issue. Thank you!

~Nick

zm711 commented 10 months ago

@nmtimme, I honestly don't know the containers very well, so I'll loop in @alejoe91 at this point so he can double-check.

alejoe91 commented 10 months ago

Hi @nmtimme

@zm711 thanks for helping with this!

So the problem seems to be related to the version of Neo installed in the container. If your spikeinterface version is from source (e.g., si.__version__ has a dev in it), then Neo will be installed from its master branch and the problem should be solved. If you are using a release version of SI, for example 0.98.2, Neo will be installed from pip and we have to wait for a new Neo release.

So I recommend installing SpikeInterface from source. See instructions here: https://spikeinterface.readthedocs.io/en/latest/installation.html#from-source

nmtimme commented 10 months ago

Hi @alejoe91 !

Thanks for your quick response! When we start up the code, we print out the versions of si and neo and we get:

si version: 0.99.0.dev0
neo version: 0.13.0.dev0

So it seems like we're using the proper versions of si and neo, right? I deleted kilosort3-compiled-base:latest.sif and reran the analysis to make sure we downloaded the container with the most recent version of si, but the error persists. Is there a way to print the version of neo being used in the container? All it reports back is:

Installing extra requirements: ['neo']

I'd appreciate any other suggestions you can provide. Thank you!

~Nick

zm711 commented 10 months ago

@nmtimme,

There's a release prep happening right now, but I'll make sure this doesn't fall off the radar. I've also linked an issue on the Neo side about some problems with dropped signals, in case you want to read about it (if you're planning to use a lot of Open Ephys data through spikeinterface/neo).

zm711 commented 10 months ago

@alejoe91, when you get a moment, can you look at this one again? There was an additional issue with the container that @nmtimme was asking about, and I promised I would ping you back in after the release.

alejoe91 commented 10 months ago

@nmtimme can you paste here the output of:

print(raw_rec._kwargs)

I think we might not be correctly saving the ignore_timestamps_errors flag internally.

alejoe91 commented 10 months ago

@nmtimme actually I think I found the issue. Can you install from this PR? https://github.com/SpikeInterface/spikeinterface/pull/2174

nmtimme commented 10 months ago

@alejoe91 Thanks for this suggestion! However, I'm not sure how to correctly install from that PR. I had been installing spikeinterface with a yml file to create a conda environment. I'd appreciate any guidance on how to do this.

Also, we were able to deal with cutting out a chunk of an open ephys recording using suggestions here: https://github.com/open-ephys/open-ephys-python-tools/issues/26#issuecomment-1775892824. So, this isn't a high priority fix for us, though I'm sure others would appreciate it.

Thank you!

~Nick

claudio1212 commented 9 months ago

Hi, I'm having the exact same problem as this user. I'm trying to sort an OE legacy recording (one .continuous file per recording) through the latest version of spikeinterface, but it returns the same error as in this thread, even when using the ignore_timestamps_errors=True flag in read_openephys. I'm pretty sure there are no gaps in my recording, given that I've checked it in NeuroExplorer.

zm711 commented 5 months ago

We've added more updates to the OE legacy reader in Neo, including removing ignore_timestamp_error in favor of a fill option. Feel free to test it and let us know if that works!